Test Report: Docker_Linux_crio 12230

1c76ff5cea01605c2d985c010644edf1e689d34b:2021-08-13:19970

Failed tests (10/250)

Order  Failed test  Duration (s)
30 TestAddons/parallel/Ingress 312.68
155 TestMultiNode/serial/PingHostFrom2Pods 3.75
194 TestPreload 152.69
196 TestScheduledStopUnix 70.93
200 TestRunningBinaryUpgrade 151.8
201 TestStoppedBinaryUpgrade 172.98
251 TestStartStop/group/no-preload/serial/Stop 1656.08
282 TestStartStop/group/embed-certs/serial/Pause 6.51
287 TestNetworkPlugins/group/calico/Start 532.37
301 TestNetworkPlugins/group/enable-default-cni/DNS 290.87
TestAddons/parallel/Ingress (312.68s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "ingress-nginx" ...
helpers_test.go:343: "ingress-nginx-admission-create-bpfw7" [f20227a7-9164-4b1d-b47b-56c20f47803b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 4.145695ms
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210812235522-676638 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:180: (dbg) Run:  kubectl --context addons-20210812235522-676638 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [ffb96dc3-333f-4c8e-8016-8957d3fd9e17] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [ffb96dc3-333f-4c8e-8016-8957d3fd9e17] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 18.087780527s
addons_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235522-676638 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

=== CONT  TestAddons/parallel/Ingress
addons_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210812235522-676638 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.691622611s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:224: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210812235522-676638 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235522-676638 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-20210812235522-676638 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.769846677s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:262: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:265: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235522-676638 addons disable ingress --alsologtostderr -v=1
addons_test.go:265: (dbg) Done: out/minikube-linux-amd64 -p addons-20210812235522-676638 addons disable ingress --alsologtostderr -v=1: (28.754607512s)
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-20210812235522-676638
helpers_test.go:236: (dbg) docker inspect addons-20210812235522-676638:

-- stdout --
	[
	    {
	        "Id": "7dd277d6b392d00b94a317757e345dabb569218ca153823613ed6e3685b06298",
	        "Created": "2021-08-12T23:55:24.572201739Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 678567,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-12T23:55:25.091871746Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
	        "ResolvConfPath": "/var/lib/docker/containers/7dd277d6b392d00b94a317757e345dabb569218ca153823613ed6e3685b06298/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7dd277d6b392d00b94a317757e345dabb569218ca153823613ed6e3685b06298/hostname",
	        "HostsPath": "/var/lib/docker/containers/7dd277d6b392d00b94a317757e345dabb569218ca153823613ed6e3685b06298/hosts",
	        "LogPath": "/var/lib/docker/containers/7dd277d6b392d00b94a317757e345dabb569218ca153823613ed6e3685b06298/7dd277d6b392d00b94a317757e345dabb569218ca153823613ed6e3685b06298-json.log",
	        "Name": "/addons-20210812235522-676638",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20210812235522-676638:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20210812235522-676638",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f054f711acbd31b1b01f71c25ba713415b3ffe9ebdcdff4b20b8b30d117b2aaf-init/diff:/var/lib/docker/overlay2/dbcccdfd1d8030c8fd84392abd0651a1c83d85eef1664675f19095ba94d0669c/diff:/var/lib/docker/overlay2/48560ccfa5a167568c6c277306b75040147fa803b45938da98f999b9b34770ec/diff:/var/lib/docker/overlay2/82edbb53b45859b009a31b65fc937517517994e9f7f2b61ab6a2cd9b5d793ea6/diff:/var/lib/docker/overlay2/67407816da0f4fc9226789b4471160bc847b978aa567cac46bd77c492c2e0bd8/diff:/var/lib/docker/overlay2/56fdeb530def71ef2955d22a7a9769b93f1dfc06d3e44e40ff20fce371d47e93/diff:/var/lib/docker/overlay2/eb0df517e10831d2d369ffbcc40b44f2ae8a39b1845697429224cb9ee96aef88/diff:/var/lib/docker/overlay2/0f2b796a50d0eef34622b78f6ac5a1b4914163a3b69965848a40245456d0a358/diff:/var/lib/docker/overlay2/fbdca95cd15a30d761b8949a28398e3694f3cd5af4e11f01066b8aa89ab0e223/diff:/var/lib/docker/overlay2/94899cbf3c806327e740cdc8b542a92bcf6e487ba93ab006749e9b13198b697a/diff:/var/lib/docker/overlay2/26a7c8
74215c711e77443c1fe264e626d5672e0127f6210b657baea90dc79adb/diff:/var/lib/docker/overlay2/16bd4fd277923e4600e9bd3819ae284053a099ab01e82d41f29752792664be0e/diff:/var/lib/docker/overlay2/7309f9c878e5d24824d68bef540877dc63f2d4c0745de5d0bf7f09e2a65c4600/diff:/var/lib/docker/overlay2/69de2b4390e19f2dda71ecf7c7fef7a9c01fabcf86a7c439a2919ae1284c8de6/diff:/var/lib/docker/overlay2/0ff6ec4f8c21672b1a77bd0033b8786942a677b797ffa1c0fbbb8e03a13d84ed/diff:/var/lib/docker/overlay2/d672d17598d05d9daa3eddac9f958d6913ebfccf431eb143f1f3903b89d150a9/diff:/var/lib/docker/overlay2/0f5d711484163b1b3f60dd6126d6daa0154c241a003764ef80e81d73d68b3ed6/diff:/var/lib/docker/overlay2/d3e7cb92a45651117204153d8d9bc8490164e7c8f439d0c6d157aebf680816ae/diff:/var/lib/docker/overlay2/4b81367fe927507da6730098aedd39a4bd58482dacc101a1dd66f191161dce2d/diff:/var/lib/docker/overlay2/5e9324cbc949319d8647c63cf76f1776a9474d1b961f604c7d87daeb7ebb111d/diff:/var/lib/docker/overlay2/010e1940f131233ee479e023b64f3d26d5b8444f44686cc3f0f1508d966a3c37/diff:/var/lib/d
ocker/overlay2/842ba2e088d8e8cdfa07950eb5be4178d7c22d5932419eb6881e2551df6383d1/diff:/var/lib/docker/overlay2/5a3a00a19445c1d8b4de2bac2fee0c796647356d04b424b1a92c252905d279b0/diff:/var/lib/docker/overlay2/fe2f56e2617a01ef886be702089b24e7058e63d8e824252c67d4c1a0879ad160/diff:/var/lib/docker/overlay2/38b35bcc55b3c7019af7c73f5eed6e0fc162e93a9f9dc7005f87a42358887784/diff:/var/lib/docker/overlay2/d9c894d408f003f4a81d231415f823e9936440a1ee3195405f2fa88b29cd4174/diff:/var/lib/docker/overlay2/1f809a5b11bbef9de3b7711ec341e3852caa4fd2c21e59015b082ae96980b66a/diff:/var/lib/docker/overlay2/99b8edcd10c58a9d6dc18c04bc3d78ee5e078fd13677063e50d0f8b7cd484f8e/diff:/var/lib/docker/overlay2/b7e659e3e24c55bbbb4560a609f06959cff67515ccfed5251eb149eb25e46066/diff:/var/lib/docker/overlay2/cd8af3183f19e2c4a376399788541c30ba2531a85eeecf9fe11864d329a507d9/diff:/var/lib/docker/overlay2/84813126d4751fc1c3f21d3f70102678caac8153899dc8a5e0af833842e299a8/diff:/var/lib/docker/overlay2/2a328079a8a98d312436a8d89f7b47dde7400fe0357b71b76ed6bc760f8
0f741/diff:/var/lib/docker/overlay2/68fb29110f487206a1dee378747a2f3ef1c89149c9968662726587ea918839d7/diff:/var/lib/docker/overlay2/f9baf28d86b9d2aa6fbb47eab690cb3a8a89d77efe26a5f0c73e8f86bce7214f/diff:/var/lib/docker/overlay2/dad436e2a201d537bbbd0e375ec55a078b79dad52ee0a39399e1d1044bef8136/diff:/var/lib/docker/overlay2/4c5f3abd2b3256b532e1522df89aaca80508afb850fe2081fd29e234ecc52a3c/diff:/var/lib/docker/overlay2/abd7c1d6e94e128091e4cd7c4e2b418a6e7f40430fa8e22724424ee318edfaa2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f054f711acbd31b1b01f71c25ba713415b3ffe9ebdcdff4b20b8b30d117b2aaf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f054f711acbd31b1b01f71c25ba713415b3ffe9ebdcdff4b20b8b30d117b2aaf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f054f711acbd31b1b01f71c25ba713415b3ffe9ebdcdff4b20b8b30d117b2aaf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-20210812235522-676638",
	                "Source": "/var/lib/docker/volumes/addons-20210812235522-676638/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20210812235522-676638",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20210812235522-676638",
	                "name.minikube.sigs.k8s.io": "addons-20210812235522-676638",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3964435ceee4e39a22cd3ba30519132c2b8e3586baf6a4f933774f0339b31cb2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33258"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33257"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33254"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33256"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33255"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3964435ceee4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20210812235522-676638": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7dd277d6b392"
	                    ],
	                    "NetworkID": "6a01913f5cfa9d35ab1eed236f41b6bde7f2bebad0ab19a7ce5726bd91a13796",
	                    "EndpointID": "290ecc73cc56284bd34e72b6f693a44b50621c7abd614540432da7e9b79dd891",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-20210812235522-676638 -n addons-20210812235522-676638
helpers_test.go:245: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235522-676638 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p addons-20210812235522-676638 logs -n 25: (1.040154978s)
helpers_test.go:253: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                 Args                  |                Profile                |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                 | no-preload-20210812223644-31519       | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:55:08 UTC | Thu, 12 Aug 2021 23:55:11 UTC |
	| delete  | -p                                    | download-only-20210812235441-676638   | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:55:11 UTC | Thu, 12 Aug 2021 23:55:11 UTC |
	|         | download-only-20210812235441-676638   |                                       |         |         |                               |                               |
	| delete  | -p                                    | download-only-20210812235441-676638   | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:55:11 UTC | Thu, 12 Aug 2021 23:55:12 UTC |
	|         | download-only-20210812235441-676638   |                                       |         |         |                               |                               |
	| delete  | -p                                    | download-docker-20210812235512-676638 | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:55:22 UTC | Thu, 12 Aug 2021 23:55:22 UTC |
	|         | download-docker-20210812235512-676638 |                                       |         |         |                               |                               |
	| start   | -p                                    | addons-20210812235522-676638          | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:55:22 UTC | Thu, 12 Aug 2021 23:57:46 UTC |
	|         | addons-20210812235522-676638          |                                       |         |         |                               |                               |
	|         | --wait=true --memory=4000             |                                       |         |         |                               |                               |
	|         | --alsologtostderr                     |                                       |         |         |                               |                               |
	|         | --addons=registry                     |                                       |         |         |                               |                               |
	|         | --addons=metrics-server               |                                       |         |         |                               |                               |
	|         | --addons=olm                          |                                       |         |         |                               |                               |
	|         | --addons=volumesnapshots              |                                       |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver          |                                       |         |         |                               |                               |
	|         | --driver=docker                       |                                       |         |         |                               |                               |
	|         | --container-runtime=crio              |                                       |         |         |                               |                               |
	|         | --addons=ingress                      |                                       |         |         |                               |                               |
	|         | --addons=helm-tiller                  |                                       |         |         |                               |                               |
	| -p      | addons-20210812235522-676638          | addons-20210812235522-676638          | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:58:00 UTC | Thu, 12 Aug 2021 23:58:10 UTC |
	|         | addons enable gcp-auth --force        |                                       |         |         |                               |                               |
	| -p      | addons-20210812235522-676638          | addons-20210812235522-676638          | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:58:32 UTC | Thu, 12 Aug 2021 23:58:32 UTC |
	|         | ip                                    |                                       |         |         |                               |                               |
	| -p      | addons-20210812235522-676638          | addons-20210812235522-676638          | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:58:32 UTC | Thu, 12 Aug 2021 23:58:33 UTC |
	|         | addons disable registry               |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210812235522-676638          | addons-20210812235522-676638          | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:58:38 UTC | Thu, 12 Aug 2021 23:58:39 UTC |
	|         | addons disable metrics-server         |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210812235522-676638          | addons-20210812235522-676638          | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:58:49 UTC | Thu, 12 Aug 2021 23:58:50 UTC |
	|         | addons disable helm-tiller            |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210812235522-676638          | addons-20210812235522-676638          | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:58:45 UTC | Thu, 12 Aug 2021 23:58:58 UTC |
	|         | addons disable gcp-auth               |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210812235522-676638          | addons-20210812235522-676638          | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:59:08 UTC | Thu, 12 Aug 2021 23:59:15 UTC |
	|         | addons disable                        |                                       |         |         |                               |                               |
	|         | csi-hostpath-driver                   |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210812235522-676638          | addons-20210812235522-676638          | jenkins | v1.22.0 | Thu, 12 Aug 2021 23:59:15 UTC | Thu, 12 Aug 2021 23:59:16 UTC |
	|         | addons disable volumesnapshots        |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	| -p      | addons-20210812235522-676638          | addons-20210812235522-676638          | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:03:32 UTC | Fri, 13 Aug 2021 00:04:00 UTC |
	|         | addons disable ingress                |                                       |         |         |                               |                               |
	|         | --alsologtostderr -v=1                |                                       |         |         |                               |                               |
	|---------|---------------------------------------|---------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/12 23:55:22
	Running on machine: debian-jenkins-agent-12
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 23:55:22.771744  677922 out.go:298] Setting OutFile to fd 1 ...
	I0812 23:55:22.771837  677922 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 23:55:22.771841  677922 out.go:311] Setting ErrFile to fd 2...
	I0812 23:55:22.771844  677922 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 23:55:22.771962  677922 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0812 23:55:22.772282  677922 out.go:305] Setting JSON to false
	I0812 23:55:22.810526  677922 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":13084,"bootTime":1628799438,"procs":206,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0812 23:55:22.810659  677922 start.go:121] virtualization: kvm guest
	I0812 23:55:22.813744  677922 out.go:177] * [addons-20210812235522-676638] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0812 23:55:22.815517  677922 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0812 23:55:22.813942  677922 notify.go:169] Checking for updates...
	I0812 23:55:22.817335  677922 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0812 23:55:22.818959  677922 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0812 23:55:22.820548  677922 out.go:177]   - MINIKUBE_LOCATION=12230
	I0812 23:55:22.820813  677922 driver.go:335] Setting default libvirt URI to qemu:///system
	I0812 23:55:22.870113  677922 docker.go:132] docker version: linux-19.03.15
	I0812 23:55:22.870253  677922 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 23:55:22.954860  677922 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:59 SystemTime:2021-08-12 23:55:22.906495411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0812 23:55:22.954980  677922 docker.go:244] overlay module found
	I0812 23:55:22.957330  677922 out.go:177] * Using the docker driver based on user configuration
	I0812 23:55:22.957371  677922 start.go:278] selected driver: docker
	I0812 23:55:22.957380  677922 start.go:751] validating driver "docker" against <nil>
	I0812 23:55:22.957409  677922 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0812 23:55:22.957501  677922 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0812 23:55:22.957526  677922 out.go:242] ! Your cgroup does not allow setting memory.
	I0812 23:55:22.959239  677922 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0812 23:55:22.960183  677922 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 23:55:23.043174  677922 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:59 SystemTime:2021-08-12 23:55:22.996272607 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0812 23:55:23.043301  677922 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0812 23:55:23.043448  677922 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0812 23:55:23.043470  677922 cni.go:93] Creating CNI manager for ""
	I0812 23:55:23.043476  677922 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0812 23:55:23.043485  677922 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0812 23:55:23.043493  677922 start_flags.go:277] config:
	{Name:addons-20210812235522-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210812235522-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 23:55:23.046091  677922 out.go:177] * Starting control plane node addons-20210812235522-676638 in cluster addons-20210812235522-676638
	I0812 23:55:23.046165  677922 cache.go:117] Beginning downloading kic base image for docker with crio
	I0812 23:55:23.048033  677922 out.go:177] * Pulling base image ...
	I0812 23:55:23.048061  677922 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0812 23:55:23.048101  677922 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0812 23:55:23.048123  677922 cache.go:56] Caching tarball of preloaded images
	I0812 23:55:23.048153  677922 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0812 23:55:23.048342  677922 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0812 23:55:23.048359  677922 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0812 23:55:23.048652  677922 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/config.json ...
	I0812 23:55:23.048680  677922 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/config.json: {Name:mk36547774a390e6a1b0d8e154a31de6ac44fd63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:55:23.135304  677922 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0812 23:55:23.135338  677922 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0812 23:55:23.135358  677922 cache.go:205] Successfully downloaded all kic artifacts
	I0812 23:55:23.135403  677922 start.go:313] acquiring machines lock for addons-20210812235522-676638: {Name:mk8ddf55fbc6abf675de88457c72f12d866f3386 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0812 23:55:23.135555  677922 start.go:317] acquired machines lock for "addons-20210812235522-676638" in 130.637µs
	I0812 23:55:23.135589  677922 start.go:89] Provisioning new machine with config: &{Name:addons-20210812235522-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210812235522-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0812 23:55:23.135701  677922 start.go:126] createHost starting for "" (driver="docker")
	I0812 23:55:23.138321  677922 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0812 23:55:23.138628  677922 start.go:160] libmachine.API.Create for "addons-20210812235522-676638" (driver="docker")
	I0812 23:55:23.138659  677922 client.go:168] LocalClient.Create starting
	I0812 23:55:23.138790  677922 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0812 23:55:23.268339  677922 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0812 23:55:23.427740  677922 cli_runner.go:115] Run: docker network inspect addons-20210812235522-676638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0812 23:55:23.464877  677922 cli_runner.go:162] docker network inspect addons-20210812235522-676638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0812 23:55:23.464960  677922 network_create.go:255] running [docker network inspect addons-20210812235522-676638] to gather additional debugging logs...
	I0812 23:55:23.464985  677922 cli_runner.go:115] Run: docker network inspect addons-20210812235522-676638
	W0812 23:55:23.502501  677922 cli_runner.go:162] docker network inspect addons-20210812235522-676638 returned with exit code 1
	I0812 23:55:23.502534  677922 network_create.go:258] error running [docker network inspect addons-20210812235522-676638]: docker network inspect addons-20210812235522-676638: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20210812235522-676638
	I0812 23:55:23.502552  677922 network_create.go:260] output of [docker network inspect addons-20210812235522-676638]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20210812235522-676638
	
	** /stderr **
	I0812 23:55:23.502601  677922 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0812 23:55:23.541268  677922 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0000101d8] misses:0}
	I0812 23:55:23.541323  677922 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0812 23:55:23.541342  677922 network_create.go:106] attempt to create docker network addons-20210812235522-676638 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0812 23:55:23.541397  677922 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210812235522-676638
	I0812 23:55:23.613360  677922 network_create.go:90] docker network addons-20210812235522-676638 192.168.49.0/24 created
	I0812 23:55:23.613403  677922 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210812235522-676638" container
	I0812 23:55:23.613490  677922 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0812 23:55:23.652402  677922 cli_runner.go:115] Run: docker volume create addons-20210812235522-676638 --label name.minikube.sigs.k8s.io=addons-20210812235522-676638 --label created_by.minikube.sigs.k8s.io=true
	I0812 23:55:23.689402  677922 oci.go:102] Successfully created a docker volume addons-20210812235522-676638
	I0812 23:55:23.689484  677922 cli_runner.go:115] Run: docker run --rm --name addons-20210812235522-676638-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210812235522-676638 --entrypoint /usr/bin/test -v addons-20210812235522-676638:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0812 23:55:24.439307  677922 oci.go:106] Successfully prepared a docker volume addons-20210812235522-676638
	W0812 23:55:24.439373  677922 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0812 23:55:24.439381  677922 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0812 23:55:24.439439  677922 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0812 23:55:24.439447  677922 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0812 23:55:24.439468  677922 kic.go:179] Starting extracting preloaded images to volume ...
	I0812 23:55:24.439527  677922 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-20210812235522-676638:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0812 23:55:24.527932  677922 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210812235522-676638 --name addons-20210812235522-676638 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210812235522-676638 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210812235522-676638 --network addons-20210812235522-676638 --ip 192.168.49.2 --volume addons-20210812235522-676638:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0812 23:55:25.101942  677922 cli_runner.go:115] Run: docker container inspect addons-20210812235522-676638 --format={{.State.Running}}
	I0812 23:55:25.149171  677922 cli_runner.go:115] Run: docker container inspect addons-20210812235522-676638 --format={{.State.Status}}
	I0812 23:55:25.196458  677922 cli_runner.go:115] Run: docker exec addons-20210812235522-676638 stat /var/lib/dpkg/alternatives/iptables
	I0812 23:55:25.332287  677922 oci.go:278] the created container "addons-20210812235522-676638" has a running status.
	I0812 23:55:25.332349  677922 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235522-676638/id_rsa...
	I0812 23:55:25.447801  677922 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235522-676638/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0812 23:55:25.855824  677922 cli_runner.go:115] Run: docker container inspect addons-20210812235522-676638 --format={{.State.Status}}
	I0812 23:55:25.895294  677922 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0812 23:55:25.895320  677922 kic_runner.go:115] Args: [docker exec --privileged addons-20210812235522-676638 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0812 23:55:28.252681  677922 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-20210812235522-676638:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (3.813108208s)
	I0812 23:55:28.252718  677922 kic.go:188] duration metric: took 3.813248 seconds to extract preloaded images to volume
	I0812 23:55:28.252790  677922 cli_runner.go:115] Run: docker container inspect addons-20210812235522-676638 --format={{.State.Status}}
	I0812 23:55:28.291806  677922 machine.go:88] provisioning docker machine ...
	I0812 23:55:28.291851  677922 ubuntu.go:169] provisioning hostname "addons-20210812235522-676638"
	I0812 23:55:28.291915  677922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812235522-676638
	I0812 23:55:28.329634  677922 main.go:130] libmachine: Using SSH client type: native
	I0812 23:55:28.329821  677922 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33258 <nil> <nil>}
	I0812 23:55:28.329836  677922 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210812235522-676638 && echo "addons-20210812235522-676638" | sudo tee /etc/hostname
	I0812 23:55:28.473886  677922 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210812235522-676638
	
	I0812 23:55:28.473957  677922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812235522-676638
	I0812 23:55:28.511843  677922 main.go:130] libmachine: Using SSH client type: native
	I0812 23:55:28.512003  677922 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33258 <nil> <nil>}
	I0812 23:55:28.512024  677922 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210812235522-676638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210812235522-676638/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210812235522-676638' | sudo tee -a /etc/hosts; 
				fi
			fi
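The hostname-injection command above is self-contained shell, so it can be rehearsed against a scratch hosts file. This is a minimal sketch, assuming a stand-in path `/tmp/hosts.test` (the real run edits `/etc/hosts` under sudo inside the node container) and using POSIX character classes in place of the log's `\s`:

```shell
# Rehearse minikube's /etc/hosts rewrite against a scratch file
# (hypothetical path; the real command targets /etc/hosts via sudo).
HOSTS=/tmp/hosts.test
NODE=addons-20210812235522-676638
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
if ! grep -q "[[:space:]]$NODE" "$HOSTS"; then
  if grep -q '^127.0.1.1[[:space:]]' "$HOSTS"; then
    # A 127.0.1.1 entry exists: replace it with the node name
    sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $NODE/" "$HOSTS"
  else
    # No 127.0.1.1 entry yet: append one
    echo "127.0.1.1 $NODE" >> "$HOSTS"
  fi
fi
grep '^127.0.1.1' "$HOSTS"
```

The final `grep` shows the rewritten entry, `127.0.1.1 addons-20210812235522-676638`.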
	I0812 23:55:28.625268  677922 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0812 23:55:28.625319  677922 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0812 23:55:28.625361  677922 ubuntu.go:177] setting up certificates
	I0812 23:55:28.625372  677922 provision.go:83] configureAuth start
	I0812 23:55:28.625432  677922 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210812235522-676638
	I0812 23:55:28.663788  677922 provision.go:137] copyHostCerts
	I0812 23:55:28.663862  677922 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1675 bytes)
	I0812 23:55:28.663970  677922 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1082 bytes)
	I0812 23:55:28.664026  677922 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0812 23:55:28.664069  677922 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.addons-20210812235522-676638 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210812235522-676638]
	I0812 23:55:28.903256  677922 provision.go:171] copyRemoteCerts
	I0812 23:55:28.903328  677922 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0812 23:55:28.903367  677922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812235522-676638
	I0812 23:55:28.945631  677922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33258 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235522-676638/id_rsa Username:docker}
	I0812 23:55:29.028685  677922 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0812 23:55:29.045826  677922 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0812 23:55:29.062835  677922 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0812 23:55:29.079240  677922 provision.go:86] duration metric: configureAuth took 453.849734ms
	I0812 23:55:29.079267  677922 ubuntu.go:193] setting minikube options for container-runtime
	I0812 23:55:29.079533  677922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812235522-676638
	I0812 23:55:29.118218  677922 main.go:130] libmachine: Using SSH client type: native
	I0812 23:55:29.118392  677922 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33258 <nil> <nil>}
	I0812 23:55:29.118417  677922 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0812 23:55:29.470517  677922 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0812 23:55:29.470551  677922 machine.go:91] provisioned docker machine in 1.178720315s
	I0812 23:55:29.470560  677922 client.go:171] LocalClient.Create took 6.331895952s
	I0812 23:55:29.470576  677922 start.go:168] duration metric: libmachine.API.Create for "addons-20210812235522-676638" took 6.331946427s
	I0812 23:55:29.470586  677922 start.go:267] post-start starting for "addons-20210812235522-676638" (driver="docker")
	I0812 23:55:29.470592  677922 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0812 23:55:29.470654  677922 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0812 23:55:29.470696  677922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812235522-676638
	I0812 23:55:29.510119  677922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33258 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235522-676638/id_rsa Username:docker}
	I0812 23:55:29.597342  677922 ssh_runner.go:149] Run: cat /etc/os-release
	I0812 23:55:29.600343  677922 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0812 23:55:29.600368  677922 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0812 23:55:29.600383  677922 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0812 23:55:29.600392  677922 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0812 23:55:29.600405  677922 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0812 23:55:29.600473  677922 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0812 23:55:29.600506  677922 start.go:270] post-start completed in 129.91281ms
	I0812 23:55:29.600825  677922 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210812235522-676638
	I0812 23:55:29.638981  677922 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/config.json ...
	I0812 23:55:29.639383  677922 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0812 23:55:29.639438  677922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812235522-676638
	I0812 23:55:29.677586  677922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33258 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235522-676638/id_rsa Username:docker}
	I0812 23:55:29.758484  677922 start.go:129] duration metric: createHost completed in 6.622767317s
	I0812 23:55:29.758514  677922 start.go:80] releasing machines lock for "addons-20210812235522-676638", held for 6.622940743s
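The disk-usage probe run over SSH at 23:55:29.639 is a plain `df`/`awk` pipeline and can be reproduced on any Linux host. A sketch, checking `/` here rather than the node's `/var`:

```shell
# Second line of `df -h` is the filesystem row; column 5 is Use%.
df -h / | awk 'NR==2{print $5}'
```

This prints the use percentage for the filesystem, e.g. `42%` (the value varies by host).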
	I0812 23:55:29.758604  677922 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210812235522-676638
	I0812 23:55:29.796664  677922 ssh_runner.go:149] Run: systemctl --version
	I0812 23:55:29.796690  677922 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0812 23:55:29.796735  677922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812235522-676638
	I0812 23:55:29.796746  677922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812235522-676638
	I0812 23:55:29.836751  677922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33258 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235522-676638/id_rsa Username:docker}
	I0812 23:55:29.837024  677922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33258 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235522-676638/id_rsa Username:docker}
	I0812 23:55:29.955029  677922 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0812 23:55:29.974015  677922 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0812 23:55:29.982832  677922 docker.go:153] disabling docker service ...
	I0812 23:55:29.982895  677922 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0812 23:55:29.992217  677922 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0812 23:55:30.001201  677922 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0812 23:55:30.066514  677922 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0812 23:55:30.133559  677922 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0812 23:55:30.143166  677922 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0812 23:55:30.156620  677922 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0812 23:55:30.164969  677922 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0812 23:55:30.164999  677922 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0812 23:55:30.173177  677922 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0812 23:55:30.179537  677922 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0812 23:55:30.179595  677922 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0812 23:55:30.186488  677922 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0812 23:55:30.192491  677922 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0812 23:55:30.253389  677922 ssh_runner.go:149] Run: sudo systemctl start crio
	I0812 23:55:30.263317  677922 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0812 23:55:30.263399  677922 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0812 23:55:30.266818  677922 start.go:417] Will wait 60s for crictl version
	I0812 23:55:30.266891  677922 ssh_runner.go:149] Run: sudo crictl version
	I0812 23:55:30.296738  677922 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0812 23:55:30.296841  677922 ssh_runner.go:149] Run: crio --version
	I0812 23:55:30.361652  677922 ssh_runner.go:149] Run: crio --version
	I0812 23:55:30.428008  677922 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0812 23:55:30.428119  677922 cli_runner.go:115] Run: docker network inspect addons-20210812235522-676638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0812 23:55:30.466380  677922 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0812 23:55:30.469996  677922 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 23:55:30.479774  677922 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0812 23:55:30.479838  677922 ssh_runner.go:149] Run: sudo crictl images --output json
	I0812 23:55:30.524482  677922 crio.go:424] all images are preloaded for cri-o runtime.
	I0812 23:55:30.524503  677922 crio.go:333] Images already preloaded, skipping extraction
	I0812 23:55:30.524547  677922 ssh_runner.go:149] Run: sudo crictl images --output json
	I0812 23:55:30.548214  677922 crio.go:424] all images are preloaded for cri-o runtime.
	I0812 23:55:30.548246  677922 cache_images.go:74] Images are preloaded, skipping loading
	I0812 23:55:30.548321  677922 ssh_runner.go:149] Run: crio config
	I0812 23:55:30.616692  677922 cni.go:93] Creating CNI manager for ""
	I0812 23:55:30.616721  677922 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0812 23:55:30.616731  677922 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0812 23:55:30.616747  677922 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210812235522-676638 NodeName:addons-20210812235522-676638 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0812 23:55:30.616895  677922 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "addons-20210812235522-676638"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0812 23:55:30.616996  677922 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-20210812235522-676638 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:addons-20210812235522-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0812 23:55:30.617060  677922 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0812 23:55:30.624091  677922 binaries.go:44] Found k8s binaries, skipping transfer
	I0812 23:55:30.624153  677922 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0812 23:55:30.630772  677922 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (559 bytes)
	I0812 23:55:30.643187  677922 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0812 23:55:30.655536  677922 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0812 23:55:30.667610  677922 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0812 23:55:30.670570  677922 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0812 23:55:30.679403  677922 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638 for IP: 192.168.49.2
	I0812 23:55:30.679455  677922 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0812 23:55:30.964652  677922 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt ...
	I0812 23:55:30.964697  677922 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt: {Name:mk2d27d2ee7872e2389e0bc31a58b049cfba69b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:55:30.964938  677922 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key ...
	I0812 23:55:30.964959  677922 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key: {Name:mkdfef5c40cc434378ef7e8b822aaa4de07e98d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:55:30.965076  677922 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0812 23:55:31.024877  677922 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt ...
	I0812 23:55:31.024915  677922 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt: {Name:mk334da5083d3dcde26bad8f7ffc84b79ad0b176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:55:31.025137  677922 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key ...
	I0812 23:55:31.025155  677922 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key: {Name:mk66b024687771a7946245484f3af62c3fe35642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:55:31.025320  677922 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.key
	I0812 23:55:31.025334  677922 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt with IP's: []
	I0812 23:55:31.337294  677922 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt ...
	I0812 23:55:31.337339  677922 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: {Name:mkbf6db866e8d085128c57f9e331dd96242a2afb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:55:31.337576  677922 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.key ...
	I0812 23:55:31.337592  677922 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.key: {Name:mk6a646e2d73da68ace644afbb261510e69a9b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:55:31.337681  677922 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/apiserver.key.dd3b5fb2
	I0812 23:55:31.337692  677922 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0812 23:55:31.520748  677922 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/apiserver.crt.dd3b5fb2 ...
	I0812 23:55:31.520786  677922 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/apiserver.crt.dd3b5fb2: {Name:mk3c68e3c911a194dce91ef8ac0b159d69a71ed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:55:31.520989  677922 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/apiserver.key.dd3b5fb2 ...
	I0812 23:55:31.521002  677922 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/apiserver.key.dd3b5fb2: {Name:mkeb069be93d271eeae5c006d8293f8bba738c6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:55:31.521080  677922 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/apiserver.crt
	I0812 23:55:31.521140  677922 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/apiserver.key
	I0812 23:55:31.521189  677922 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/proxy-client.key
	I0812 23:55:31.521199  677922 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/proxy-client.crt with IP's: []
	I0812 23:55:31.682123  677922 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/proxy-client.crt ...
	I0812 23:55:31.682161  677922 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/proxy-client.crt: {Name:mk4ce84efdf46f74ebc2adaa854023d44d8431ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:55:31.682376  677922 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/proxy-client.key ...
	I0812 23:55:31.682389  677922 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/proxy-client.key: {Name:mk6475da4e668f9e8aad081c975ec600c6889572 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:55:31.682569  677922 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1675 bytes)
	I0812 23:55:31.682649  677922 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1082 bytes)
	I0812 23:55:31.682685  677922 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0812 23:55:31.682709  677922 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1675 bytes)
	I0812 23:55:31.683723  677922 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0812 23:55:31.702804  677922 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0812 23:55:31.721352  677922 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0812 23:55:31.739705  677922 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0812 23:55:31.757489  677922 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0812 23:55:31.774592  677922 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0812 23:55:31.791302  677922 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0812 23:55:31.807802  677922 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0812 23:55:31.824271  677922 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0812 23:55:31.841165  677922 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0812 23:55:31.854135  677922 ssh_runner.go:149] Run: openssl version
	I0812 23:55:31.859127  677922 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0812 23:55:31.867172  677922 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0812 23:55:31.870453  677922 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 12 23:55 /usr/share/ca-certificates/minikubeCA.pem
	I0812 23:55:31.870519  677922 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0812 23:55:31.875508  677922 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0812 23:55:31.883835  677922 kubeadm.go:390] StartCluster: {Name:addons-20210812235522-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210812235522-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 23:55:31.883939  677922 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0812 23:55:31.883988  677922 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0812 23:55:31.909722  677922 cri.go:76] found id: ""
	I0812 23:55:31.909806  677922 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0812 23:55:31.916993  677922 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0812 23:55:31.923767  677922 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0812 23:55:31.923820  677922 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0812 23:55:31.930439  677922 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0812 23:55:31.930477  677922 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0812 23:55:32.234773  677922 out.go:204]   - Generating certificates and keys ...
	I0812 23:55:35.050847  677922 out.go:204]   - Booting up control plane ...
	I0812 23:55:51.599764  677922 out.go:204]   - Configuring RBAC rules ...
	I0812 23:55:52.018645  677922 cni.go:93] Creating CNI manager for ""
	I0812 23:55:52.018681  677922 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0812 23:55:52.020497  677922 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0812 23:55:52.020563  677922 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0812 23:55:52.024192  677922 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0812 23:55:52.024217  677922 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0812 23:55:52.036876  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0812 23:55:52.453852  677922 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0812 23:55:52.453953  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:55:52.453955  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19 minikube.k8s.io/name=addons-20210812235522-676638 minikube.k8s.io/updated_at=2021_08_12T23_55_52_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:55:52.470486  677922 ops.go:34] apiserver oom_adj: -16
	I0812 23:55:52.521759  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:55:53.117531  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:55:53.617468  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:55:54.117116  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:55:54.616967  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:55:55.117000  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:55:55.617800  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:55:56.117769  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:55:56.617637  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:55:57.117770  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:55:57.617655  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:55:58.116828  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:55:58.617595  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:55:59.117705  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:55:59.617773  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:56:01.700038  677922 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.082215486s)
	I0812 23:56:02.117757  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:56:03.879721  677922 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.761920964s)
	I0812 23:56:04.116989  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:56:05.250223  677922 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.133192345s)
	I0812 23:56:05.617698  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:56:06.117694  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:56:06.617010  677922 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0812 23:56:06.684255  677922 kubeadm.go:985] duration metric: took 14.230368115s to wait for elevateKubeSystemPrivileges.
	I0812 23:56:06.684288  677922 kubeadm.go:392] StartCluster complete in 34.800462169s
	I0812 23:56:06.684314  677922 settings.go:142] acquiring lock: {Name:mk8e048b414f35bb1583f1d1b3e929d90c1bd9ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:56:06.684478  677922 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0812 23:56:06.685042  677922 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig: {Name:mk7dda383efa2f679c68affe6e459fff93248137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:56:07.204098  677922 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210812235522-676638" rescaled to 1
	I0812 23:56:07.204184  677922 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0812 23:56:07.204205  677922 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0812 23:56:07.206780  677922 out.go:177] * Verifying Kubernetes components...
	I0812 23:56:07.204236  677922 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress helm-tiller]
	I0812 23:56:07.206891  677922 addons.go:59] Setting volumesnapshots=true in profile "addons-20210812235522-676638"
	I0812 23:56:07.206902  677922 addons.go:59] Setting ingress=true in profile "addons-20210812235522-676638"
	I0812 23:56:07.206930  677922 addons.go:59] Setting storage-provisioner=true in profile "addons-20210812235522-676638"
	I0812 23:56:07.206963  677922 addons.go:59] Setting helm-tiller=true in profile "addons-20210812235522-676638"
	I0812 23:56:07.206996  677922 addons.go:135] Setting addon storage-provisioner=true in "addons-20210812235522-676638"
	W0812 23:56:07.207006  677922 addons.go:147] addon storage-provisioner should already be in state true
	I0812 23:56:07.207013  677922 addons.go:135] Setting addon helm-tiller=true in "addons-20210812235522-676638"
	I0812 23:56:07.207035  677922 host.go:66] Checking if "addons-20210812235522-676638" exists ...
	I0812 23:56:07.207067  677922 host.go:66] Checking if "addons-20210812235522-676638" exists ...
	I0812 23:56:07.207653  677922 cli_runner.go:115] Run: docker container inspect addons-20210812235522-676638 --format={{.State.Status}}
	I0812 23:56:07.207686  677922 cli_runner.go:115] Run: docker container inspect addons-20210812235522-676638 --format={{.State.Status}}
	I0812 23:56:07.206910  677922 addons.go:135] Setting addon volumesnapshots=true in "addons-20210812235522-676638"
	I0812 23:56:07.207860  677922 host.go:66] Checking if "addons-20210812235522-676638" exists ...
	I0812 23:56:07.206914  677922 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0812 23:56:07.206919  677922 addons.go:59] Setting registry=true in profile "addons-20210812235522-676638"
	I0812 23:56:07.208073  677922 addons.go:135] Setting addon registry=true in "addons-20210812235522-676638"
	I0812 23:56:07.208132  677922 host.go:66] Checking if "addons-20210812235522-676638" exists ...
	I0812 23:56:07.208372  677922 cli_runner.go:115] Run: docker container inspect addons-20210812235522-676638 --format={{.State.Status}}
	I0812 23:56:07.206912  677922 addons.go:59] Setting default-storageclass=true in profile "addons-20210812235522-676638"
	I0812 23:56:07.208597  677922 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210812235522-676638"
	I0812 23:56:07.208653  677922 cli_runner.go:115] Run: docker container inspect addons-20210812235522-676638 --format={{.State.Status}}
	I0812 23:56:07.206930  677922 addons.go:59] Setting metrics-server=true in profile "addons-20210812235522-676638"
	I0812 23:56:07.208892  677922 cli_runner.go:115] Run: docker container inspect addons-20210812235522-676638 --format={{.State.Status}}
	I0812 23:56:07.208901  677922 addons.go:135] Setting addon metrics-server=true in "addons-20210812235522-676638"
	I0812 23:56:07.208934  677922 host.go:66] Checking if "addons-20210812235522-676638" exists ...
	I0812 23:56:07.206921  677922 addons.go:59] Setting olm=true in profile "addons-20210812235522-676638"
	I0812 23:56:07.208997  677922 addons.go:135] Setting addon olm=true in "addons-20210812235522-676638"
	I0812 23:56:07.209044  677922 host.go:66] Checking if "addons-20210812235522-676638" exists ...
	I0812 23:56:07.206940  677922 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210812235522-676638"
	I0812 23:56:07.209116  677922 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210812235522-676638"
	I0812 23:56:07.209161  677922 host.go:66] Checking if "addons-20210812235522-676638" exists ...
	I0812 23:56:07.209484  677922 cli_runner.go:115] Run: docker container inspect addons-20210812235522-676638 --format={{.State.Status}}
	I0812 23:56:07.209572  677922 cli_runner.go:115] Run: docker container inspect addons-20210812235522-676638 --format={{.State.Status}}
	I0812 23:56:07.206948  677922 addons.go:135] Setting addon ingress=true in "addons-20210812235522-676638"
	I0812 23:56:07.209685  677922 host.go:66] Checking if "addons-20210812235522-676638" exists ...
	I0812 23:56:07.209691  677922 cli_runner.go:115] Run: docker container inspect addons-20210812235522-676638 --format={{.State.Status}}
	I0812 23:56:07.210122  677922 cli_runner.go:115] Run: docker container inspect addons-20210812235522-676638 --format={{.State.Status}}
	I0812 23:56:07.284590  677922 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0812 23:56:07.284769  677922 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 23:56:07.284789  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0812 23:56:07.284858  677922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812235522-676638
	I0812 23:56:07.286930  677922 out.go:177]   - Using image gcr.io/kubernetes-helm/tiller:v2.16.12
	I0812 23:56:07.287060  677922 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0812 23:56:07.287072  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2433 bytes)
	I0812 23:56:07.287133  677922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812235522-676638
	I0812 23:56:07.306402  677922 out.go:177]   - Using image registry:2.7.1
	I0812 23:56:07.308310  677922 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0812 23:56:07.308446  677922 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
	I0812 23:56:07.308464  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0812 23:56:07.308536  677922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812235522-676638
	I0812 23:56:07.309997  677922 addons.go:135] Setting addon default-storageclass=true in "addons-20210812235522-676638"
	W0812 23:56:07.310179  677922 addons.go:147] addon default-storageclass should already be in state true
	I0812 23:56:07.310219  677922 host.go:66] Checking if "addons-20210812235522-676638" exists ...
	I0812 23:56:07.310784  677922 cli_runner.go:115] Run: docker container inspect addons-20210812235522-676638 --format={{.State.Status}}
	I0812 23:56:07.322807  677922 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0812 23:56:07.322896  677922 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0812 23:56:07.322917  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0812 23:56:07.322987  677922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812235522-676638
	I0812 23:56:07.325053  677922 out.go:177]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0812 23:56:07.327082  677922 out.go:177]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0812 23:56:07.348090  677922 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0812 23:56:07.350616  677922 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0812 23:56:07.352302  677922 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0812 23:56:07.353925  677922 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
	I0812 23:56:07.355514  677922 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0812 23:56:07.354126  677922 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0812 23:56:07.355608  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
	I0812 23:56:07.357356  677922 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0812 23:56:07.355689  677922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812235522-676638
	I0812 23:56:07.359062  677922 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0812 23:56:07.360638  677922 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0812 23:56:07.362114  677922 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0812 23:56:07.361666  677922 node_ready.go:35] waiting up to 6m0s for node "addons-20210812235522-676638" to be "Ready" ...
	I0812 23:56:07.363777  677922 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0812 23:56:07.361835  677922 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0812 23:56:07.359227  677922 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0812 23:56:07.365491  677922 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0812 23:56:07.365500  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0812 23:56:07.365571  677922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812235522-676638
	I0812 23:56:07.367076  677922 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0812 23:56:07.366697  677922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33258 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235522-676638/id_rsa Username:docker}
	I0812 23:56:07.368812  677922 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0812 23:56:07.368917  677922 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0812 23:56:07.368932  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0812 23:56:07.368999  677922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812235522-676638
	I0812 23:56:07.366948  677922 node_ready.go:49] node "addons-20210812235522-676638" has status "Ready":"True"
	I0812 23:56:07.369117  677922 node_ready.go:38] duration metric: took 6.960014ms waiting for node "addons-20210812235522-676638" to be "Ready" ...
	I0812 23:56:07.369135  677922 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 23:56:07.369165  677922 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0812 23:56:07.369181  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0812 23:56:07.369255  677922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812235522-676638
	I0812 23:56:07.386771  677922 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace to be "Ready" ...
	I0812 23:56:07.407380  677922 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0812 23:56:07.407408  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0812 23:56:07.407473  677922 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210812235522-676638
	I0812 23:56:07.410510  677922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33258 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235522-676638/id_rsa Username:docker}
	I0812 23:56:07.424071  677922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33258 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235522-676638/id_rsa Username:docker}
	I0812 23:56:07.461064  677922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33258 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235522-676638/id_rsa Username:docker}
	I0812 23:56:07.467617  677922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33258 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235522-676638/id_rsa Username:docker}
	I0812 23:56:07.467794  677922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33258 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235522-676638/id_rsa Username:docker}
	I0812 23:56:07.474779  677922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33258 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235522-676638/id_rsa Username:docker}
	I0812 23:56:07.477599  677922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33258 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235522-676638/id_rsa Username:docker}
	I0812 23:56:07.485344  677922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33258 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210812235522-676638/id_rsa Username:docker}
	I0812 23:56:07.614881  677922 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0812 23:56:07.614915  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0812 23:56:07.690036  677922 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
	I0812 23:56:07.690072  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0812 23:56:07.711795  677922 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0812 23:56:07.713300  677922 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0812 23:56:07.713327  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0812 23:56:07.716030  677922 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0812 23:56:07.718081  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0812 23:56:07.790893  677922 addons.go:275] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0812 23:56:07.790922  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0812 23:56:07.798123  677922 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0812 23:56:07.798158  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0812 23:56:07.815101  677922 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0812 23:56:07.815132  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0812 23:56:07.816429  677922 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0812 23:56:07.816453  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
	I0812 23:56:07.891416  677922 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0812 23:56:07.891448  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0812 23:56:07.898531  677922 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0812 23:56:07.907787  677922 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0812 23:56:07.909978  677922 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0812 23:56:08.001796  677922 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0812 23:56:08.001894  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0812 23:56:08.007342  677922 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0812 23:56:08.007372  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0812 23:56:08.012255  677922 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0812 23:56:08.012282  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
	I0812 23:56:08.095907  677922 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 23:56:08.095941  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0812 23:56:08.097341  677922 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
	I0812 23:56:08.097366  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
	I0812 23:56:08.190408  677922 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0812 23:56:08.190502  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0812 23:56:08.206527  677922 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0812 23:56:08.206558  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0812 23:56:08.209474  677922 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0812 23:56:08.211506  677922 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0812 23:56:08.403197  677922 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0812 23:56:08.405374  677922 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0812 23:56:08.405404  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0812 23:56:08.495693  677922 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0812 23:56:08.495724  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0812 23:56:08.694887  677922 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.329363305s)
	I0812 23:56:08.694922  677922 start.go:736] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
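The host-record injection that just completed is a sed edit of the CoreDNS Corefile: minikube fetches the `coredns` ConfigMap, inserts a `hosts` block before the `forward` plugin, and replaces the ConfigMap. A minimal standalone sketch of the same transformation, using an assumed sample Corefile fragment (the real one comes from `kubectl -n kube-system get configmap coredns`; the `\n` escapes in the `i` command are a GNU sed extension):

```shell
# Assumed sample Corefile fragment; minikube operates on the real ConfigMap.
corefile='        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }'
# Insert a hosts{} block resolving host.minikube.internal ahead of the
# forward plugin, exactly as the piped sed command in the log above does.
printf '%s\n' "$corefile" |
  sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }'
```

Because the `hosts` plugin falls through, only `host.minikube.internal` is served locally; everything else still reaches the upstream resolver via `forward`.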
	I0812 23:56:08.793101  677922 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0812 23:56:08.793184  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0812 23:56:08.911266  677922 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0812 23:56:08.911303  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0812 23:56:09.104010  677922 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0812 23:56:09.204277  677922 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0812 23:56:09.204309  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0812 23:56:09.408882  677922 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0812 23:56:09.408916  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0812 23:56:09.602892  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:09.705429  677922 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0812 23:56:09.705460  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0812 23:56:09.901847  677922 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0812 23:56:09.901875  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0812 23:56:10.011066  677922 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0812 23:56:10.011095  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0812 23:56:10.191745  677922 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.29317065s)
	I0812 23:56:10.191794  677922 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.479972151s)
	I0812 23:56:10.210199  677922 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0812 23:56:10.210229  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0812 23:56:10.389885  677922 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0812 23:56:10.389918  677922 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0812 23:56:10.507181  677922 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0812 23:56:10.707360  677922 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.797331897s)
	I0812 23:56:10.707397  677922 addons.go:313] Verifying addon registry=true in "addons-20210812235522-676638"
	I0812 23:56:10.709335  677922 out.go:177] * Verifying registry addon...
	I0812 23:56:10.707742  677922 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (2.799908712s)
	I0812 23:56:10.712016  677922 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0812 23:56:10.794079  677922 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0812 23:56:10.794163  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:11.001884  677922 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.79232094s)
	I0812 23:56:11.001927  677922 addons.go:313] Verifying addon metrics-server=true in "addons-20210812235522-676638"
	I0812 23:56:11.306653  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:11.905243  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:12.095475  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:12.296693  677922 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (4.085150221s)
	I0812 23:56:12.296734  677922 addons.go:313] Verifying addon ingress=true in "addons-20210812235522-676638"
	I0812 23:56:12.298765  677922 out.go:177] * Verifying ingress addon...
	I0812 23:56:12.301467  677922 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0812 23:56:12.312990  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:12.313592  677922 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0812 23:56:12.313652  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:12.821877  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:12.896393  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:13.391533  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:13.394556  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:13.892049  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:13.910325  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:13.912242  677922 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (5.509002312s)
	W0812 23:56:13.912311  677922 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0812 23:56:13.912340  677922 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0812 23:56:13.912421  677922 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.808365235s)
	W0812 23:56:13.912474  677922 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0812 23:56:13.912496  677922 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0812 23:56:14.188994  677922 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0812 23:56:14.273421  677922 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0812 23:56:14.303951  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:14.318194  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:14.505517  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:14.995934  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:15.002188  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:15.304686  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:15.403069  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:15.692554  677922 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.185251692s)
	I0812 23:56:15.692698  677922 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210812235522-676638"
	I0812 23:56:15.697633  677922 out.go:177] * Verifying csi-hostpath-driver addon...
	I0812 23:56:15.700042  677922 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0812 23:56:15.802433  677922 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0812 23:56:15.802460  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:15.803875  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:15.817505  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:16.300758  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:16.308036  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:16.317587  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:16.798745  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:16.807981  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:16.817213  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:16.919051  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:17.299333  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:17.308064  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:17.317080  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:17.519749  677922 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (3.330706383s)
	I0812 23:56:17.519962  677922 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.246459425s)
	I0812 23:56:17.799295  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:17.807054  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:17.817038  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:18.299832  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:18.307463  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:18.317083  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:18.799053  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:18.806966  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:18.817935  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:18.919671  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:19.299123  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:19.307480  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:19.318169  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:19.798899  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:19.806563  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:19.817147  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:20.299074  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:20.306991  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:20.317638  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:20.798406  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:20.808259  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:20.818429  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:21.299010  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:21.307315  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:21.319191  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:21.419992  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:21.799432  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:21.807730  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:21.817348  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:22.298806  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:22.307785  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:22.317032  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:22.799140  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:22.808216  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:22.817637  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:23.299550  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:23.308071  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:23.317764  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:23.799555  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:23.808248  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:23.818107  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:23.919093  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:24.299813  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:24.310787  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:24.317520  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:24.798934  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:24.807379  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:24.818540  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:25.298696  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:25.307922  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:25.317533  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:25.798974  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:25.806965  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:25.817857  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:25.919741  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:26.299393  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:26.307766  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:26.317524  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:26.798985  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:26.806974  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:26.817803  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:27.299005  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:27.307295  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:27.317968  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:27.799594  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:27.808964  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:27.818128  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:28.299553  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:28.307703  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:28.317162  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:28.418660  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:28.798844  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:28.806650  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:28.817276  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:29.299787  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:29.310611  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:29.317063  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:29.799256  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:29.807266  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:29.817354  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:30.299176  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:30.307693  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:30.317867  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:30.799416  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:30.807816  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:30.817463  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:30.919304  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:31.299028  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:31.307290  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:31.317734  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:31.798395  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:31.811845  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:31.817387  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:32.299450  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:32.307796  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:32.317540  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:32.799040  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:32.807861  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:32.817096  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:33.299434  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:33.307684  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:33.317006  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:33.419736  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:33.802792  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:33.807015  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:33.817786  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:34.299067  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:34.307154  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:34.317444  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:34.801877  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:34.807021  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:34.817667  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:35.299033  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:35.307307  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:35.317776  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:35.420438  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:35.798661  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:35.812209  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:35.816961  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:36.299051  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:36.307254  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:36.317792  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:36.802851  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:36.807429  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:36.818153  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:37.308681  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:37.309594  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:37.316876  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:37.798662  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:37.808098  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:37.818288  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:37.919468  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:38.298994  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:38.307066  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:38.317774  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:38.798519  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:38.808362  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:38.818125  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:39.299123  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:39.307419  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:39.318008  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:39.801340  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:39.807331  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:39.818010  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:39.919554  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:40.299356  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:40.307620  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:40.318225  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:40.798721  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:40.807650  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:40.817421  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:41.298700  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:41.308994  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:41.316870  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:41.799315  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:41.808185  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:41.817671  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:41.920035  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:42.300487  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:42.308532  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:42.318460  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:42.800005  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:42.807184  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:42.817536  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:43.311557  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:43.315311  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:43.407450  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:43.802751  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:43.810371  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:43.893724  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:43.995606  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:44.299642  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:44.313811  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:44.395227  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:44.800015  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:44.809142  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:44.892702  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:45.301379  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:45.308491  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:45.317835  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:45.799165  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:45.808103  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:45.817925  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:46.299683  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:46.308746  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:46.318507  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:46.420114  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:46.799659  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:46.808333  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:46.818296  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:47.299294  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:47.308172  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:47.414825  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:47.799661  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:47.809126  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:47.819250  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:48.299273  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:48.308378  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:48.318228  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:48.799741  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:48.812951  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:48.818561  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:48.918925  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:49.299685  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:49.309989  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:49.317590  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:49.800275  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:49.807867  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:49.817008  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:50.299122  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:50.307479  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:50.317883  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:50.798587  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:50.807796  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:50.817140  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:50.919035  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:51.299512  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:51.308163  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:51.317840  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:51.798568  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:51.808615  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:51.818630  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:52.299293  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:52.307969  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:52.318142  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:52.799357  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:52.811992  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:52.817342  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:52.919768  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:53.299851  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:53.308756  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:53.316972  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:53.798608  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:53.808395  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:53.817770  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:54.299944  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:54.308952  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:54.318082  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:54.798850  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:54.808355  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:54.817705  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:55.299321  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:55.307946  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:55.317490  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:55.419588  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:55.799183  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:55.808517  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:55.817965  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:56.298997  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:56.307278  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:56.317957  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:56.799499  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:56.808174  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:56.817460  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:57.298830  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:57.310747  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:57.316627  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:57.494320  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:57.798802  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:57.809725  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:57.818275  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:58.299303  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:58.307839  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:58.317208  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:58.800080  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:58.809604  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:58.818745  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:59.300047  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:59.308301  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:59.317876  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:56:59.495948  677922 pod_ready.go:102] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"False"
	I0812 23:56:59.800444  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:56:59.808419  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:56:59.819021  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:00.299490  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:00.308127  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:00.318178  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:00.419429  677922 pod_ready.go:92] pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace has status "Ready":"True"
	I0812 23:57:00.419462  677922 pod_ready.go:81] duration metric: took 53.032646624s waiting for pod "coredns-558bd4d5db-bdsbb" in "kube-system" namespace to be "Ready" ...
	I0812 23:57:00.419476  677922 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-z5ksw" in "kube-system" namespace to be "Ready" ...
	I0812 23:57:00.421571  677922 pod_ready.go:97] error getting pod "coredns-558bd4d5db-z5ksw" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-z5ksw" not found
	I0812 23:57:00.421595  677922 pod_ready.go:81] duration metric: took 2.111625ms waiting for pod "coredns-558bd4d5db-z5ksw" in "kube-system" namespace to be "Ready" ...
	E0812 23:57:00.421605  677922 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-z5ksw" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-z5ksw" not found
	I0812 23:57:00.421611  677922 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210812235522-676638" in "kube-system" namespace to be "Ready" ...
	I0812 23:57:00.425541  677922 pod_ready.go:92] pod "etcd-addons-20210812235522-676638" in "kube-system" namespace has status "Ready":"True"
	I0812 23:57:00.425559  677922 pod_ready.go:81] duration metric: took 3.941104ms waiting for pod "etcd-addons-20210812235522-676638" in "kube-system" namespace to be "Ready" ...
	I0812 23:57:00.425572  677922 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210812235522-676638" in "kube-system" namespace to be "Ready" ...
	I0812 23:57:00.429754  677922 pod_ready.go:92] pod "kube-apiserver-addons-20210812235522-676638" in "kube-system" namespace has status "Ready":"True"
	I0812 23:57:00.429774  677922 pod_ready.go:81] duration metric: took 4.1953ms waiting for pod "kube-apiserver-addons-20210812235522-676638" in "kube-system" namespace to be "Ready" ...
	I0812 23:57:00.429785  677922 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210812235522-676638" in "kube-system" namespace to be "Ready" ...
	I0812 23:57:00.436226  677922 pod_ready.go:92] pod "kube-controller-manager-addons-20210812235522-676638" in "kube-system" namespace has status "Ready":"True"
	I0812 23:57:00.436247  677922 pod_ready.go:81] duration metric: took 6.452116ms waiting for pod "kube-controller-manager-addons-20210812235522-676638" in "kube-system" namespace to be "Ready" ...
	I0812 23:57:00.436258  677922 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nswlp" in "kube-system" namespace to be "Ready" ...
	I0812 23:57:00.618803  677922 pod_ready.go:92] pod "kube-proxy-nswlp" in "kube-system" namespace has status "Ready":"True"
	I0812 23:57:00.618826  677922 pod_ready.go:81] duration metric: took 182.560867ms waiting for pod "kube-proxy-nswlp" in "kube-system" namespace to be "Ready" ...
	I0812 23:57:00.618840  677922 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210812235522-676638" in "kube-system" namespace to be "Ready" ...
	I0812 23:57:00.798752  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:00.807886  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:00.817638  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:01.019920  677922 pod_ready.go:92] pod "kube-scheduler-addons-20210812235522-676638" in "kube-system" namespace has status "Ready":"True"
	I0812 23:57:01.019940  677922 pod_ready.go:81] duration metric: took 401.091985ms waiting for pod "kube-scheduler-addons-20210812235522-676638" in "kube-system" namespace to be "Ready" ...
	I0812 23:57:01.019947  677922 pod_ready.go:38] duration metric: took 53.650795656s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0812 23:57:01.019965  677922 api_server.go:50] waiting for apiserver process to appear ...
	I0812 23:57:01.020007  677922 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0812 23:57:01.091403  677922 api_server.go:70] duration metric: took 53.887174576s to wait for apiserver process to appear ...
	I0812 23:57:01.091434  677922 api_server.go:86] waiting for apiserver healthz status ...
	I0812 23:57:01.091451  677922 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0812 23:57:01.097014  677922 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0812 23:57:01.098096  677922 api_server.go:139] control plane version: v1.21.3
	I0812 23:57:01.098118  677922 api_server.go:129] duration metric: took 6.676689ms to wait for apiserver health ...
	I0812 23:57:01.098128  677922 system_pods.go:43] waiting for kube-system pods to appear ...
	I0812 23:57:01.223080  677922 system_pods.go:59] 19 kube-system pods found
	I0812 23:57:01.223115  677922 system_pods.go:61] "coredns-558bd4d5db-bdsbb" [92486d99-2cf1-402b-a503-ab96c9b618b9] Running
	I0812 23:57:01.223119  677922 system_pods.go:61] "csi-hostpath-attacher-0" [3559dd5a-735c-4e11-951f-056c7911b65a] Running
	I0812 23:57:01.223126  677922 system_pods.go:61] "csi-hostpath-provisioner-0" [dad9da8e-97e0-4196-a963-8315cf836b9f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-provisioner])
	I0812 23:57:01.223131  677922 system_pods.go:61] "csi-hostpath-resizer-0" [d3448f40-25c4-4c01-b213-1e1bb8940f00] Running
	I0812 23:57:01.223140  677922 system_pods.go:61] "csi-hostpath-snapshotter-0" [45961444-6ead-46db-b268-54db67e14986] Pending / Ready:ContainersNotReady (containers with unready status: [csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-snapshotter])
	I0812 23:57:01.223151  677922 system_pods.go:61] "csi-hostpathplugin-0" [1a0f82e1-e68d-48ed-bc3c-366375c5fed6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I0812 23:57:01.223158  677922 system_pods.go:61] "etcd-addons-20210812235522-676638" [3ddcb71a-e0de-4843-92ea-3714e803e2f6] Running
	I0812 23:57:01.223166  677922 system_pods.go:61] "kindnet-8n78k" [d47c9523-38af-4b33-bd69-4c258d68f315] Running
	I0812 23:57:01.223172  677922 system_pods.go:61] "kube-apiserver-addons-20210812235522-676638" [d2fa6b1f-a56f-425d-a3ce-3776f592166e] Running
	I0812 23:57:01.223188  677922 system_pods.go:61] "kube-controller-manager-addons-20210812235522-676638" [b3059e21-ed6f-495d-aa22-b6e12830b849] Running
	I0812 23:57:01.223192  677922 system_pods.go:61] "kube-proxy-nswlp" [cfd1dbc6-f8dd-48dd-83d9-e2c5a7513861] Running
	I0812 23:57:01.223196  677922 system_pods.go:61] "kube-scheduler-addons-20210812235522-676638" [ca0ea89b-51f2-4472-96db-6c258c5518e5] Running
	I0812 23:57:01.223202  677922 system_pods.go:61] "metrics-server-77c99ccb96-hh2jp" [443aa982-36d7-45da-ac72-9d6fa4915ee6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 23:57:01.223210  677922 system_pods.go:61] "registry-8jjjh" [a711703f-7521-471b-8ea3-0734e481ba15] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0812 23:57:01.223217  677922 system_pods.go:61] "registry-proxy-pd29p" [3569e920-9b07-4802-8b75-8e33d9e8fa45] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0812 23:57:01.223228  677922 system_pods.go:61] "snapshot-controller-989f9ddc8-v79vp" [7d16a57c-5506-4429-bb5b-31117c4cb5cb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0812 23:57:01.223238  677922 system_pods.go:61] "snapshot-controller-989f9ddc8-xcj4l" [8a373dee-af65-4ba3-b39d-61ab0ac638d4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0812 23:57:01.223244  677922 system_pods.go:61] "storage-provisioner" [58517ae2-0ca0-4973-87df-7cecd7fbfd80] Running
	I0812 23:57:01.223255  677922 system_pods.go:61] "tiller-deploy-768d69497-zrqlj" [bfb7f0b2-ab6d-45af-912b-f2e37c637853] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0812 23:57:01.223272  677922 system_pods.go:74] duration metric: took 125.136434ms to wait for pod list to return data ...
	I0812 23:57:01.223287  677922 default_sa.go:34] waiting for default service account to be created ...
	I0812 23:57:01.299118  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:01.308293  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:01.317950  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:01.419529  677922 default_sa.go:45] found service account: "default"
	I0812 23:57:01.419621  677922 default_sa.go:55] duration metric: took 196.323407ms for default service account to be created ...
	I0812 23:57:01.419653  677922 system_pods.go:116] waiting for k8s-apps to be running ...
	I0812 23:57:01.623806  677922 system_pods.go:86] 19 kube-system pods found
	I0812 23:57:01.623841  677922 system_pods.go:89] "coredns-558bd4d5db-bdsbb" [92486d99-2cf1-402b-a503-ab96c9b618b9] Running
	I0812 23:57:01.623850  677922 system_pods.go:89] "csi-hostpath-attacher-0" [3559dd5a-735c-4e11-951f-056c7911b65a] Running
	I0812 23:57:01.623860  677922 system_pods.go:89] "csi-hostpath-provisioner-0" [dad9da8e-97e0-4196-a963-8315cf836b9f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-provisioner])
	I0812 23:57:01.623866  677922 system_pods.go:89] "csi-hostpath-resizer-0" [d3448f40-25c4-4c01-b213-1e1bb8940f00] Running
	I0812 23:57:01.623877  677922 system_pods.go:89] "csi-hostpath-snapshotter-0" [45961444-6ead-46db-b268-54db67e14986] Pending / Ready:ContainersNotReady (containers with unready status: [csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-snapshotter])
	I0812 23:57:01.623886  677922 system_pods.go:89] "csi-hostpathplugin-0" [1a0f82e1-e68d-48ed-bc3c-366375c5fed6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-agent csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe])
	I0812 23:57:01.623894  677922 system_pods.go:89] "etcd-addons-20210812235522-676638" [3ddcb71a-e0de-4843-92ea-3714e803e2f6] Running
	I0812 23:57:01.623902  677922 system_pods.go:89] "kindnet-8n78k" [d47c9523-38af-4b33-bd69-4c258d68f315] Running
	I0812 23:57:01.623910  677922 system_pods.go:89] "kube-apiserver-addons-20210812235522-676638" [d2fa6b1f-a56f-425d-a3ce-3776f592166e] Running
	I0812 23:57:01.623920  677922 system_pods.go:89] "kube-controller-manager-addons-20210812235522-676638" [b3059e21-ed6f-495d-aa22-b6e12830b849] Running
	I0812 23:57:01.623936  677922 system_pods.go:89] "kube-proxy-nswlp" [cfd1dbc6-f8dd-48dd-83d9-e2c5a7513861] Running
	I0812 23:57:01.623943  677922 system_pods.go:89] "kube-scheduler-addons-20210812235522-676638" [ca0ea89b-51f2-4472-96db-6c258c5518e5] Running
	I0812 23:57:01.623952  677922 system_pods.go:89] "metrics-server-77c99ccb96-hh2jp" [443aa982-36d7-45da-ac72-9d6fa4915ee6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0812 23:57:01.623967  677922 system_pods.go:89] "registry-8jjjh" [a711703f-7521-471b-8ea3-0734e481ba15] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0812 23:57:01.623983  677922 system_pods.go:89] "registry-proxy-pd29p" [3569e920-9b07-4802-8b75-8e33d9e8fa45] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0812 23:57:01.623999  677922 system_pods.go:89] "snapshot-controller-989f9ddc8-v79vp" [7d16a57c-5506-4429-bb5b-31117c4cb5cb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0812 23:57:01.624012  677922 system_pods.go:89] "snapshot-controller-989f9ddc8-xcj4l" [8a373dee-af65-4ba3-b39d-61ab0ac638d4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0812 23:57:01.624022  677922 system_pods.go:89] "storage-provisioner" [58517ae2-0ca0-4973-87df-7cecd7fbfd80] Running
	I0812 23:57:01.624035  677922 system_pods.go:89] "tiller-deploy-768d69497-zrqlj" [bfb7f0b2-ab6d-45af-912b-f2e37c637853] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0812 23:57:01.624048  677922 system_pods.go:126] duration metric: took 204.381393ms to wait for k8s-apps to be running ...
	I0812 23:57:01.624060  677922 system_svc.go:44] waiting for kubelet service to be running ....
	I0812 23:57:01.624109  677922 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0812 23:57:01.705179  677922 system_svc.go:56] duration metric: took 81.106597ms WaitForService to wait for kubelet.
	I0812 23:57:01.705214  677922 kubeadm.go:547] duration metric: took 54.500994419s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0812 23:57:01.705312  677922 node_conditions.go:102] verifying NodePressure condition ...
	I0812 23:57:01.799736  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:01.808610  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:01.822465  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:01.822901  677922 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0812 23:57:01.822932  677922 node_conditions.go:123] node cpu capacity is 8
	I0812 23:57:01.822956  677922 node_conditions.go:105] duration metric: took 117.631673ms to run NodePressure ...
	I0812 23:57:01.822976  677922 start.go:231] waiting for startup goroutines ...
	I0812 23:57:02.298720  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:02.308761  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:02.317794  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:02.799510  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:02.807317  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:02.817488  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:03.298610  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:03.308120  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:03.317984  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:03.798525  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:03.808457  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:03.821138  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:04.300246  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:04.307938  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:04.319948  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:04.799146  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:04.807550  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:04.822603  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:05.299116  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:05.307943  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:05.317983  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:05.799690  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:05.811802  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:05.818818  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:06.299602  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:06.309649  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:06.320703  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:06.891692  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:06.911636  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:06.991052  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:07.301502  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:07.308966  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:07.318041  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:07.798633  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:07.808320  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:07.817456  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:08.304981  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:08.309797  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:08.318716  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:08.798842  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:08.808208  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:08.817572  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:09.298936  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:09.308569  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:09.318866  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:09.799126  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:09.807933  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:09.817575  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:10.299434  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:10.307561  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:10.317548  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:10.803171  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:10.807361  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:10.817585  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:11.299786  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:11.308751  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:11.317478  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:11.799595  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:11.808374  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:11.817746  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:12.299017  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:12.314652  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:12.318412  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:12.801189  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:12.811529  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:12.890934  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:13.300745  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:13.308699  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:13.318969  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:13.799360  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:13.807915  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:13.817157  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:14.298784  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:14.308772  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:14.318425  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:14.802882  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:14.807875  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:14.816995  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:15.299658  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:15.308510  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:15.318270  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:15.799588  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:15.809585  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:15.822169  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:16.299222  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:16.308507  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:16.318152  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:16.799760  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:16.808553  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:16.818244  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:17.300104  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:17.308898  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:17.318418  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:17.803466  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:17.808052  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:17.817619  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:18.298865  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:18.309132  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:18.316943  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:18.799348  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:18.808021  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:18.817349  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:19.299786  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:19.309510  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:19.317946  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:19.799482  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:19.808391  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:19.818142  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:20.299573  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:20.308042  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:20.317456  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:20.798894  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0812 23:57:20.807869  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:20.817145  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:21.298991  677922 kapi.go:108] duration metric: took 1m10.586972024s to wait for kubernetes.io/minikube-addons=registry ...
	I0812 23:57:21.307757  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:21.325105  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:21.808400  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:21.818907  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:22.308546  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:22.316965  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:22.808353  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:22.818000  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:23.307521  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:23.318073  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:23.809636  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:23.817005  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:24.308375  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:24.318134  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:24.808751  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:24.817833  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:25.308436  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:25.317582  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:25.809507  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:25.817949  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:26.308320  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:26.317668  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:26.809374  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:26.818006  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:27.308428  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:27.318103  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:27.808576  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:27.818988  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:28.808867  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:28.817608  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:29.309643  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:29.318814  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:29.809015  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:29.817563  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:30.418088  677922 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0812 23:57:30.497400  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:30.814955  677922 kapi.go:108] duration metric: took 1m15.114911482s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0812 23:57:30.896809  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:31.397015  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:31.893519  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:32.396234  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:33.119378  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:33.391827  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:33.895566  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:34.395176  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:34.900549  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:35.392564  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:35.891575  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:36.400400  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:36.906943  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:37.392306  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:38.098277  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:38.394980  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:38.892362  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:39.403141  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:39.896842  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:40.393736  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:40.898365  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:41.318086  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:41.818395  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:42.318880  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:42.818260  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:43.318930  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:43.817583  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:44.318430  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:44.818814  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:45.318299  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:45.818088  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:46.318346  677922 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0812 23:57:46.818186  677922 kapi.go:108] duration metric: took 1m34.516722107s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0812 23:57:46.820592  677922 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, helm-tiller, metrics-server, olm, volumesnapshots, registry, csi-hostpath-driver, ingress
	I0812 23:57:46.820614  677922 addons.go:344] enableAddons completed in 1m39.616395904s
	I0812 23:57:46.876534  677922 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0812 23:57:46.879229  677922 out.go:177] * Done! kubectl is now configured to use "addons-20210812235522-676638" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Thu 2021-08-12 23:55:25 UTC, end at Fri 2021-08-13 00:04:01 UTC. --
	Aug 13 00:00:03 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:00:03.313911742Z" level=info msg="Stopping pod sandbox: 2a2bba747b0dbd64168fbcfce13e3c2164f4c8a1aef0681ed033949dec3c3a0c" id=faeea361-81c1-4e3e-a403-b90fe2cb730c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 00:00:03 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:00:03.313958496Z" level=info msg="Stopped pod sandbox (already stopped): 2a2bba747b0dbd64168fbcfce13e3c2164f4c8a1aef0681ed033949dec3c3a0c" id=faeea361-81c1-4e3e-a403-b90fe2cb730c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 00:00:03 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:00:03.314289235Z" level=info msg="Removing pod sandbox: 2a2bba747b0dbd64168fbcfce13e3c2164f4c8a1aef0681ed033949dec3c3a0c" id=4c8badac-cbcc-479b-9e40-cba5df0e6bb3 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 13 00:00:03 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:00:03.429395238Z" level=info msg="Removed pod sandbox: 2a2bba747b0dbd64168fbcfce13e3c2164f4c8a1aef0681ed033949dec3c3a0c" id=4c8badac-cbcc-479b-9e40-cba5df0e6bb3 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 13 00:00:03 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:00:03.429991353Z" level=info msg="Stopping pod sandbox: 1c7bb763b285b3ab70fa98a10e9df029efd1c8a3f1041a6c9761e945e1e587f3" id=605c5a8c-0a4b-44f8-967b-0cc8b51510a2 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 00:00:03 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:00:03.430044637Z" level=info msg="Stopped pod sandbox (already stopped): 1c7bb763b285b3ab70fa98a10e9df029efd1c8a3f1041a6c9761e945e1e587f3" id=605c5a8c-0a4b-44f8-967b-0cc8b51510a2 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 00:00:03 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:00:03.430388917Z" level=info msg="Removing pod sandbox: 1c7bb763b285b3ab70fa98a10e9df029efd1c8a3f1041a6c9761e945e1e587f3" id=3c00998a-4ee2-453c-8d3d-01218cef49f7 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 13 00:00:03 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:00:03.545389281Z" level=info msg="Removed pod sandbox: 1c7bb763b285b3ab70fa98a10e9df029efd1c8a3f1041a6c9761e945e1e587f3" id=3c00998a-4ee2-453c-8d3d-01218cef49f7 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Aug 13 00:00:57 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:00:57.346960467Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.4.1" id=b745cdc3-ec8a-4fff-a455-aec9afc80354 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:00:57 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:00:57.347595786Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253,RepoTags:[k8s.gcr.io/pause:3.4.1],RepoDigests:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause@sha256:914e745e524aa94315a25b49a7fafc0aa395e332126930593225d7a513f5a6b2],Size_:689817,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b745cdc3-ec8a-4fff-a455-aec9afc80354 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:03:33 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:03:33.734685223Z" level=info msg="Stopping container: 04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01 (timeout: 29s)" id=f110afa5-6860-47f1-8c9c-d739abb6cb38 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 13 00:03:43 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:03:43.952921263Z" level=info msg="Stopped container 04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01: ingress-nginx/ingress-nginx-controller-59b45fb494-9st7d/controller" id=f110afa5-6860-47f1-8c9c-d739abb6cb38 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 13 00:03:43 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:03:43.953645934Z" level=info msg="Stopping pod sandbox: 4e25882a7b21b93053e12b39dde950155721d4aa534b5b1e00849212531e157e" id=16df05e3-5ece-40db-8d94-0ca071871bdd name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 00:03:43 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:03:43.966445695Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-59b45fb494-9st7d Namespace:ingress-nginx ID:4e25882a7b21b93053e12b39dde950155721d4aa534b5b1e00849212531e157e NetNS:/var/run/netns/1b6826f5-8176-4092-986c-5e72d18e3e7a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 13 00:03:43 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:03:43.966742822Z" level=info msg="About to del CNI network kindnet (type=ptp)"
	Aug 13 00:03:44 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:03:44.221026396Z" level=info msg="Stopped pod sandbox: 4e25882a7b21b93053e12b39dde950155721d4aa534b5b1e00849212531e157e" id=16df05e3-5ece-40db-8d94-0ca071871bdd name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 00:03:44 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:03:44.262532788Z" level=info msg="Stopping pod sandbox: 4e25882a7b21b93053e12b39dde950155721d4aa534b5b1e00849212531e157e" id=f375acc0-430a-46b5-a844-61a7abb3bcef name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 00:03:44 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:03:44.262587638Z" level=info msg="Stopped pod sandbox (already stopped): 4e25882a7b21b93053e12b39dde950155721d4aa534b5b1e00849212531e157e" id=f375acc0-430a-46b5-a844-61a7abb3bcef name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 00:03:44 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:03:44.263291203Z" level=info msg="Removing container: 04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01" id=47666f24-236b-48c5-bc04-77619dc3f75b name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 00:03:44 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:03:44.285340491Z" level=info msg="Removed container 04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01: ingress-nginx/ingress-nginx-controller-59b45fb494-9st7d/controller" id=47666f24-236b-48c5-bc04-77619dc3f75b name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 00:03:45 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:03:45.267699331Z" level=info msg="Stopping pod sandbox: 4e25882a7b21b93053e12b39dde950155721d4aa534b5b1e00849212531e157e" id=ad1bb40d-a333-436b-bc27-e8c5bd02ec77 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 00:03:45 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:03:45.267767895Z" level=info msg="Stopped pod sandbox (already stopped): 4e25882a7b21b93053e12b39dde950155721d4aa534b5b1e00849212531e157e" id=ad1bb40d-a333-436b-bc27-e8c5bd02ec77 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 00:03:45 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:03:45.281171577Z" level=info msg="Stopping container: 04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01 (timeout: 2s)" id=d9f923a4-4435-43f8-81ec-ceae32894e11 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 13 00:03:45 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:03:45.281810124Z" level=info msg="Stopping pod sandbox: 4e25882a7b21b93053e12b39dde950155721d4aa534b5b1e00849212531e157e" id=5221ce63-5585-453d-a66f-29ed6cebd579 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 00:03:45 addons-20210812235522-676638 crio[371]: time="2021-08-13 00:03:45.281861482Z" level=info msg="Stopped pod sandbox (already stopped): 4e25882a7b21b93053e12b39dde950155721d4aa534b5b1e00849212531e157e" id=5221ce63-5585-453d-a66f-29ed6cebd579 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID
	6a71e3272e865       docker.io/library/nginx@sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce                                                 4 minutes ago       Running             nginx                     0                   d5e943993ebba
	1ad7ee87195b4       9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5                                                                                5 minutes ago       Running             etcd-restore-operator     0                   5ae96b784512f
	829c90dd51959       9d5c51d92fbddcda022478def5889a9ceb074305d83f2336cfc228827a03d5d5                                                                                5 minutes ago       Running             etcd-backup-operator      0                   5ae96b784512f
	69942867557b1       quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b                                            5 minutes ago       Running             etcd-operator             0                   5ae96b784512f
	465ba14f07718       europe-west1-docker.pkg.dev/k8s-minikube/test-artifacts-eu/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8   5 minutes ago       Running             private-image-eu          0                   ee1ffe6407993
	dee4ec83bfdf5       us-docker.pkg.dev/k8s-minikube/test-artifacts/echoserver@sha256:17d678b5667fde46507d8018fb6834dcfd102e02b485a817d95dd686ff82dda8                5 minutes ago       Running             private-image             0                   27448ad54ba28
	ea7c486ce659a       docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                               5 minutes ago       Running             busybox                   0                   1ffb15fc7f73c
	bc38fbd8eb2d1       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          6 minutes ago       Running             packageserver             0                   9e1234acb00f6
	95575fb7fe2d0       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          6 minutes ago       Running             packageserver             0                   45856153f7c47
	65a65e5189350       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6                                  6 minutes ago       Exited              patch                     0                   5ba07b522b6a4
	eea7ab83c5275       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6                                  6 minutes ago       Exited              create                    0                   9adccbb9a7f7d
	3e3d268e7535d       d5444025797471ee73017096cffe85f4b149d404a3ccfdd7391d6046b88bf8f2                                                                                6 minutes ago       Running             olm-operator              0                   a6fd60960a155
	acdad570d3793       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899                                                                                7 minutes ago       Running             coredns                   0                   0a140d2aacb90
	1386d842459cc       quay.io/operator-framework/upstream-community-operators@sha256:cc7b3fdaa1ccdea5866fcd171669dc0ed88d3477779d8ed32e3712c827e38cc0                 7 minutes ago       Running             registry-server           0                   943bd37c6838c
	4f1ce903d0928       quay.io/operator-framework/olm@sha256:de396b540b82219812061d0d753440d5655250c621c753ed1dc67d6154741607                                          7 minutes ago       Running             catalog-operator          0                   8338658cae426
	1d123594ffdbc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                                7 minutes ago       Running             storage-provisioner       0                   907b8bce51787
	5dc932415a946       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92                                                                                7 minutes ago       Running             kube-proxy                0                   e24425517397b
	09469b7baf727       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                                                                7 minutes ago       Running             kindnet-cni               0                   b932d08e7f7bf
	a8431eb6b4eda       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a                                                                                8 minutes ago       Running             kube-scheduler            0                   7b18cf1e62325
	7fd00539d952b       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9                                                                                8 minutes ago       Running             kube-controller-manager   0                   7189ec47d347e
	2dd80f731bafa       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80                                                                                8 minutes ago       Running             kube-apiserver            0                   1e1191b5d010c
	f128e14cddf1e       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                                                                8 minutes ago       Running             etcd                      0                   1444e1c4c2533
	
	* 
	* ==> coredns [acdad570d3793dd17a5ff47c1eec1093e7f69f0cf18b2c8f3b780c84a31f4e9a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210812235522-676638
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-20210812235522-676638
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19
	                    minikube.k8s.io/name=addons-20210812235522-676638
	                    minikube.k8s.io/updated_at=2021_08_12T23_55_52_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210812235522-676638
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Aug 2021 23:55:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210812235522-676638
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 00:03:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Aug 2021 23:59:28 +0000   Thu, 12 Aug 2021 23:55:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Aug 2021 23:59:28 +0000   Thu, 12 Aug 2021 23:55:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Aug 2021 23:59:28 +0000   Thu, 12 Aug 2021 23:55:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Aug 2021 23:59:28 +0000   Thu, 12 Aug 2021 23:56:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20210812235522-676638
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 760e67beb8554645829f2357c8eb4ae7
	  System UUID:                97bd5c3c-6ec5-49ca-80c4-fc32c3f88741
	  Boot ID:                    f12e4c71-5c79-4cb7-b9de-5d4c99f61cf1
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  default                     nginx                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  default                     private-image-7ff9c8c74f-fhjrq                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  default                     private-image-eu-5956d58f9f-7s9x9                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 coredns-558bd4d5db-bdsbb                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m54s
	  kube-system                 etcd-addons-20210812235522-676638                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m11s
	  kube-system                 kindnet-8n78k                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m55s
	  kube-system                 kube-apiserver-addons-20210812235522-676638             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m4s
	  kube-system                 kube-controller-manager-addons-20210812235522-676638    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m4s
	  kube-system                 kube-proxy-nswlp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 kube-scheduler-addons-20210812235522-676638             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	  my-etcd                     etcd-operator-85cd4f54cd-bkcfg                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  olm                         catalog-operator-75d496484d-zt4zw                       10m (0%)      0 (0%)      80Mi (0%)        0 (0%)         7m48s
	  olm                         olm-operator-859c88c96-dz428                            10m (0%)      0 (0%)      160Mi (0%)       0 (0%)         7m48s
	  olm                         operatorhubio-catalog-5s8mk                             10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         7m12s
	  olm                         packageserver-745f6f5d79-hrgv8                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	  olm                         packageserver-745f6f5d79-ss4c2                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                880m (11%)  100m (1%)
	  memory             510Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m21s (x5 over 8m21s)  kubelet     Node addons-20210812235522-676638 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m21s (x3 over 8m21s)  kubelet     Node addons-20210812235522-676638 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m21s (x3 over 8m21s)  kubelet     Node addons-20210812235522-676638 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m4s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m4s                   kubelet     Node addons-20210812235522-676638 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m4s                   kubelet     Node addons-20210812235522-676638 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m4s                   kubelet     Node addons-20210812235522-676638 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m55s                  kubelet     Node addons-20210812235522-676638 status is now: NodeReady
	  Normal  Starting                 7m51s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000003] ll header: 00000000: a2 d1 be f9 60 7e fe 3e 8a 02 26 32 08 00        ....`~.>..&2..
	[  +2.015859] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: a2 d1 be f9 60 7e fe 3e 8a 02 26 32 08 00        ....`~.>..&2..
	[  +4.159678] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: a2 d1 be f9 60 7e fe 3e 8a 02 26 32 08 00        ....`~.>..&2..
	[  +8.195484] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: a2 d1 be f9 60 7e fe 3e 8a 02 26 32 08 00        ....`~.>..&2..
	[ +16.122901] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: a2 d1 be f9 60 7e fe 3e 8a 02 26 32 08 00        ....`~.>..&2..
	[Aug13 00:00] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: a2 d1 be f9 60 7e fe 3e 8a 02 26 32 08 00        ....`~.>..&2..
	[Aug13 00:01] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000003] ll header: 00000000: a2 d1 be f9 60 7e fe 3e 8a 02 26 32 08 00        ....`~.>..&2..
	[  +1.030001] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000003] ll header: 00000000: a2 d1 be f9 60 7e fe 3e 8a 02 26 32 08 00        ....`~.>..&2..
	[  +2.015858] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000003] ll header: 00000000: a2 d1 be f9 60 7e fe 3e 8a 02 26 32 08 00        ....`~.>..&2..
	[  +4.031749] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: a2 d1 be f9 60 7e fe 3e 8a 02 26 32 08 00        ....`~.>..&2..
	[  +8.191436] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000003] ll header: 00000000: a2 d1 be f9 60 7e fe 3e 8a 02 26 32 08 00        ....`~.>..&2..
	[ +16.126895] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: a2 d1 be f9 60 7e fe 3e 8a 02 26 32 08 00        ....`~.>..&2..
	[Aug13 00:02] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000002] ll header: 00000000: a2 d1 be f9 60 7e fe 3e 8a 02 26 32 08 00        ....`~.>..&2..
	
	* 
	* ==> etcd [1ad7ee87195b48fa7b2396ade603d374a3efb9c98fb66e8800f474a71e73011f] <==
	* time="2021-08-12T23:59:00Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-12T23:59:00Z" level=info msg="Go OS/Arch: linux/amd64"
	time="2021-08-12T23:59:00Z" level=info msg="etcd-restore-operator Version: 0.9.4"
	time="2021-08-12T23:59:00Z" level=info msg="Git SHA: c8a1c64"
	E0812 23:59:00.497743       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-restore-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"f4d202c4-76ec-4a7c-8afd-042f2a1ace5e", ResourceVersion:"2014", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764409540, loc:(*time.Location)(0x24e11a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"etcd-operator-alm-owned"}, Annotations:map[string]string{"endpoints.kubernetes.io/last-change-trigger-time":"2021-08-12T23:59:00Z", "control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-bkcfg\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-12T23:59:00Z\",\"renewTime\":\"2021-08-12T23:59:00Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-bkcfg became leader'
	time="2021-08-12T23:59:00Z" level=info msg="listening on 0.0.0.0:19999"
	time="2021-08-12T23:59:00Z" level=info msg="starting restore controller" pkg=controller
	
	* 
	* ==> etcd [69942867557b1f81867fc57444f28b0b2b153fe0235fa7007547a8932482d4ca] <==
	* time="2021-08-12T23:58:59Z" level=info msg="etcd-operator Version: 0.9.4"
	time="2021-08-12T23:58:59Z" level=info msg="Git SHA: c8a1c64"
	time="2021-08-12T23:58:59Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-12T23:58:59Z" level=info msg="Go OS/Arch: linux/amd64"
	E0812 23:58:59.418596       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"cfdd66ec-3418-4b1a-927a-4933c786de75", ResourceVersion:"1998", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764409539, loc:(*time.Location)(0x20d4640)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-bkcfg\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-12T23:58:59Z\",\"renewTime\":\"2021-08-12T23:58:59Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-bkcfg became leader'
	
	* 
	* ==> etcd [829c90dd519597cc73380448fa65175a92d60b38b6ffcb9ae2cf7a173df5ff65] <==
	* time="2021-08-12T23:58:59Z" level=info msg="Go Version: go1.11.5"
	time="2021-08-12T23:58:59Z" level=info msg="Go OS/Arch: linux/amd64"
	time="2021-08-12T23:58:59Z" level=info msg="etcd-backup-operator Version: 0.9.4"
	time="2021-08-12T23:58:59Z" level=info msg="Git SHA: c8a1c64"
	E0812 23:58:59.817561       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-backup-operator", GenerateName:"", Namespace:"my-etcd", SelfLink:"", UID:"bb5c5554-b4e4-4e3f-8a65-c29006bf7b37", ResourceVersion:"2005", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764409539, loc:(*time.Location)(0x25824c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"etcd-operator-85cd4f54cd-bkcfg\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-12T23:58:59Z\",\"renewTime\":\"2021-08-12T23:58:59Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'etcd-operator-85cd4f54cd-bkcfg became leader'
	time="2021-08-12T23:58:59Z" level=info msg="starting backup controller" pkg=controller
	
	* 
	* ==> etcd [f128e14cddf1e4d1b4a9a553d68ebb0cad1798aa6f8a1b5b1e2df5df234023dd] <==
	* 2021-08-12 23:59:58.791498 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:00:08.791491 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:00:18.791709 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:00:28.791263 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:00:38.791846 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:00:48.791430 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:00:58.791292 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:01:08.791553 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:01:18.791898 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:01:28.791842 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:01:38.791151 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:01:48.791156 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:01:58.791857 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:02:08.791930 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:02:18.791340 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:02:28.791469 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:02:38.791503 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:02:48.791843 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:02:58.791585 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:03:08.791378 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:03:18.792057 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:03:28.791547 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:03:38.792082 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:03:48.791453 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:03:58.791275 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  00:04:02 up  3:46,  0 users,  load average: 0.15, 1.51, 2.68
	Linux addons-20210812235522-676638 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [2dd80f731bafa73f677e82e6f346902b56e5d00fd5e2e4c79c8f6b9ab0bafea1] <==
	* I0812 23:59:30.603421       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 00:00:11.752901       1 client.go:360] parsed scheme: "passthrough"
	I0813 00:00:11.752955       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 00:00:11.752964       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 00:00:44.228765       1 client.go:360] parsed scheme: "passthrough"
	I0813 00:00:44.228820       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 00:00:44.228830       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 00:01:17.131150       1 client.go:360] parsed scheme: "passthrough"
	I0813 00:01:17.131204       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 00:01:17.131213       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 00:01:56.856665       1 client.go:360] parsed scheme: "passthrough"
	I0813 00:01:56.856718       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 00:01:56.856727       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 00:02:36.626718       1 client.go:360] parsed scheme: "passthrough"
	I0813 00:02:36.626763       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 00:02:36.626772       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 00:03:13.110857       1 client.go:360] parsed scheme: "passthrough"
	I0813 00:03:13.110913       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 00:03:13.110922       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0813 00:03:32.596445       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0813 00:03:32.735484       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0813 00:03:40.099920       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	I0813 00:03:48.371308       1 client.go:360] parsed scheme: "passthrough"
	I0813 00:03:48.371360       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 00:03:48.371369       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [7fd00539d952b2da9249f659463e89dcaccd6e41d13948ab1603a49b4ed5d93f] <==
	* E0812 23:59:34.519197       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0812 23:59:37.064952       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0812 23:59:37.064998       1 shared_informer.go:247] Caches are synced for resource quota 
	I0812 23:59:37.364585       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0812 23:59:37.364639       1 shared_informer.go:247] Caches are synced for garbage collector 
	E0812 23:59:38.818164       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 23:59:55.029852       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 23:59:56.259179       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0812 23:59:59.711907       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 00:00:32.537883       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 00:00:37.779423       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 00:00:40.231639       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 00:01:03.017156       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 00:01:28.314646       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 00:01:37.031760       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 00:01:38.076315       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 00:02:11.005521       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 00:02:14.067087       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 00:02:19.743768       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 00:02:41.607519       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 00:03:07.639804       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 00:03:08.629035       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 00:03:28.104462       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 00:03:37.489714       1 tokens_controller.go:262] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-gjv8n" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	E0813 00:03:45.123532       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [5dc932415a946dfeefae176fd56deec43ab13507239154eafe00cc54435bb30c] <==
	* I0812 23:56:10.713837       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0812 23:56:10.713894       1 server_others.go:140] Detected node IP 192.168.49.2
	W0812 23:56:10.713919       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0812 23:56:10.908030       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0812 23:56:10.908086       1 server_others.go:212] Using iptables Proxier.
	I0812 23:56:10.908101       1 server_others.go:219] creating dualStackProxier for iptables.
	W0812 23:56:10.908134       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0812 23:56:10.908594       1 server.go:643] Version: v1.21.3
	I0812 23:56:10.916136       1 config.go:224] Starting endpoint slice config controller
	I0812 23:56:10.916175       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0812 23:56:10.916702       1 config.go:315] Starting service config controller
	I0812 23:56:10.916739       1 shared_informer.go:240] Waiting for caches to sync for service config
	W0812 23:56:11.002620       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0812 23:56:11.005339       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0812 23:56:11.016820       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0812 23:56:11.016902       1 shared_informer.go:247] Caches are synced for service config 
	W0813 00:03:50.007177       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	
	* 
	* ==> kube-scheduler [a8431eb6b4eda6025706f30a2a4cc70e6a2748c07a3129bd7b6442fd2b435686] <==
	* E0812 23:55:47.013338       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0812 23:55:47.013341       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0812 23:55:47.013368       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0812 23:55:47.013403       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0812 23:55:47.013504       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0812 23:55:47.013551       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0812 23:55:47.013619       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0812 23:55:47.879390       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0812 23:55:47.983115       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0812 23:55:48.008977       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0812 23:55:48.021485       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0812 23:55:48.035785       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0812 23:55:48.118632       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0812 23:55:48.126249       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0812 23:55:48.161716       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0812 23:55:48.219412       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 23:55:48.247006       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0812 23:55:48.413968       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0812 23:55:48.453465       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0812 23:55:48.453468       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0812 23:55:48.542781       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0812 23:55:49.893754       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0812 23:55:50.007360       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0812 23:55:50.042091       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0812 23:55:54.810873       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2021-08-12 23:55:25 UTC, end at Fri 2021-08-13 00:04:02 UTC. --
	Aug 13 00:03:32 addons-20210812235522-676638 kubelet[1563]: E0813 00:03:32.609597    1563 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-9st7d.169ab4ce1dd6f66c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-9st7d", UID:"b4fbca6c-a710-4b79-a48c-6c2bd240db3d", APIVersion:"v1", ResourceVersion:"599", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopp
ing container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210812235522-676638"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03d8b95240bae6c, ext:460794587392, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03d8b95240bae6c, ext:460794587392, loc:(*time.Location)(0x74c3600)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-9st7d.169ab4ce1dd6f66c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 13 00:03:33 addons-20210812235522-676638 kubelet[1563]: E0813 00:03:33.517850    1563 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/7dd277d6b392d00b94a317757e345dabb569218ca153823613ed6e3685b06298/docker/7dd277d6b392d00b94a317757e345dabb569218ca153823613ed6e3685b06298\": RecentStats: unable to find data in memory cache]"
	Aug 13 00:03:34 addons-20210812235522-676638 kubelet[1563]: I0813 00:03:34.378095    1563 kubelet_pods.go:895] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/private-image-7ff9c8c74f-fhjrq" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 00:03:41 addons-20210812235522-676638 kubelet[1563]: E0813 00:03:41.719389    1563 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-9st7d.169ab4d03cd3d6cf", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-9st7d", UID:"b4fbca6c-a710-4b79-a48c-6c2bd240db3d", APIVersion:"v1", ResourceVersion:"599", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Liv
eness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210812235522-676638"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03d8b976a9774cf, ext:469904410977, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03d8b976a9774cf, ext:469904410977, loc:(*time.Location)(0x74c3600)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-9st7d.169ab4d03cd3d6cf" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 13 00:03:41 addons-20210812235522-676638 kubelet[1563]: E0813 00:03:41.720920    1563 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-9st7d.169ab4d03cd7157c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-9st7d", UID:"b4fbca6c-a710-4b79-a48c-6c2bd240db3d", APIVersion:"v1", ResourceVersion:"599", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Rea
diness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210812235522-676638"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03d8b976a9ab37c, ext:469904623648, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03d8b976a9ab37c, ext:469904623648, loc:(*time.Location)(0x74c3600)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-9st7d.169ab4d03cd7157c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 13 00:03:43 addons-20210812235522-676638 kubelet[1563]: E0813 00:03:43.643067    1563 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/7dd277d6b392d00b94a317757e345dabb569218ca153823613ed6e3685b06298/docker/7dd277d6b392d00b94a317757e345dabb569218ca153823613ed6e3685b06298\": RecentStats: unable to find data in memory cache]"
	Aug 13 00:03:44 addons-20210812235522-676638 kubelet[1563]: I0813 00:03:44.262262    1563 scope.go:111] "RemoveContainer" containerID="04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01"
	Aug 13 00:03:44 addons-20210812235522-676638 kubelet[1563]: I0813 00:03:44.285671    1563 scope.go:111] "RemoveContainer" containerID="04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01"
	Aug 13 00:03:44 addons-20210812235522-676638 kubelet[1563]: E0813 00:03:44.286149    1563 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01\": container with ID starting with 04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01 not found: ID does not exist" containerID="04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01"
	Aug 13 00:03:44 addons-20210812235522-676638 kubelet[1563]: I0813 00:03:44.286203    1563 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:cri-o ID:04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01} err="failed to get container status \"04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01\": rpc error: code = NotFound desc = could not find container \"04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01\": container with ID starting with 04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01 not found: ID does not exist"
	Aug 13 00:03:44 addons-20210812235522-676638 kubelet[1563]: I0813 00:03:44.462119    1563 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b4fbca6c-a710-4b79-a48c-6c2bd240db3d-webhook-cert\") pod \"b4fbca6c-a710-4b79-a48c-6c2bd240db3d\" (UID: \"b4fbca6c-a710-4b79-a48c-6c2bd240db3d\") "
	Aug 13 00:03:44 addons-20210812235522-676638 kubelet[1563]: I0813 00:03:44.462175    1563 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-722mx\" (UniqueName: \"kubernetes.io/projected/b4fbca6c-a710-4b79-a48c-6c2bd240db3d-kube-api-access-722mx\") pod \"b4fbca6c-a710-4b79-a48c-6c2bd240db3d\" (UID: \"b4fbca6c-a710-4b79-a48c-6c2bd240db3d\") "
	Aug 13 00:03:44 addons-20210812235522-676638 kubelet[1563]: I0813 00:03:44.489810    1563 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4fbca6c-a710-4b79-a48c-6c2bd240db3d-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b4fbca6c-a710-4b79-a48c-6c2bd240db3d" (UID: "b4fbca6c-a710-4b79-a48c-6c2bd240db3d"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 13 00:03:44 addons-20210812235522-676638 kubelet[1563]: I0813 00:03:44.489838    1563 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4fbca6c-a710-4b79-a48c-6c2bd240db3d-kube-api-access-722mx" (OuterVolumeSpecName: "kube-api-access-722mx") pod "b4fbca6c-a710-4b79-a48c-6c2bd240db3d" (UID: "b4fbca6c-a710-4b79-a48c-6c2bd240db3d"). InnerVolumeSpecName "kube-api-access-722mx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 00:03:44 addons-20210812235522-676638 kubelet[1563]: I0813 00:03:44.563244    1563 reconciler.go:319] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b4fbca6c-a710-4b79-a48c-6c2bd240db3d-webhook-cert\") on node \"addons-20210812235522-676638\" DevicePath \"\""
	Aug 13 00:03:44 addons-20210812235522-676638 kubelet[1563]: I0813 00:03:44.563313    1563 reconciler.go:319] "Volume detached for volume \"kube-api-access-722mx\" (UniqueName: \"kubernetes.io/projected/b4fbca6c-a710-4b79-a48c-6c2bd240db3d-kube-api-access-722mx\") on node \"addons-20210812235522-676638\" DevicePath \"\""
	Aug 13 00:03:45 addons-20210812235522-676638 kubelet[1563]: E0813 00:03:45.281473    1563 remote_runtime.go:276] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01\": container with ID starting with 04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01 not found: ID does not exist" containerID="04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01"
	Aug 13 00:03:45 addons-20210812235522-676638 kubelet[1563]: E0813 00:03:45.281536    1563 kuberuntime_container.go:666] "Container termination failed with gracePeriod" err="rpc error: code = NotFound desc = could not find container \"04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01\": container with ID starting with 04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01 not found: ID does not exist" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-9st7d" podUID=b4fbca6c-a710-4b79-a48c-6c2bd240db3d containerName="controller" containerID="cri-o://04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01" gracePeriod=2
	Aug 13 00:03:45 addons-20210812235522-676638 kubelet[1563]: E0813 00:03:45.281559    1563 kuberuntime_container.go:691] "Kill container failed" err="rpc error: code = NotFound desc = could not find container \"04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01\": container with ID starting with 04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01 not found: ID does not exist" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-9st7d" podUID=b4fbca6c-a710-4b79-a48c-6c2bd240db3d containerName="controller" containerID={Type:cri-o ID:04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01}
	Aug 13 00:03:45 addons-20210812235522-676638 kubelet[1563]: E0813 00:03:45.282056    1563 kubelet_pods.go:1288] "Failed killing the pod" err="failed to \"KillContainer\" for \"controller\" with KillContainerError: \"rpc error: code = NotFound desc = could not find container \\\"04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01\\\": container with ID starting with 04d4eb4b43c98ceedb389a9206ef0be1e8dfee8b5594d2cc2937d94ff7292f01 not found: ID does not exist\"" podName="ingress-nginx-controller-59b45fb494-9st7d"
	Aug 13 00:03:45 addons-20210812235522-676638 kubelet[1563]: E0813 00:03:45.284009    1563 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-59b45fb494-9st7d.169ab4ce1dd6f66c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-59b45fb494-9st7d", UID:"b4fbca6c-a710-4b79-a48c-6c2bd240db3d", APIVersion:"v1", ResourceVersion:"599", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopp
ing container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-20210812235522-676638"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc03d8b95240bae6c, ext:460794587392, loc:(*time.Location)(0x74c3600)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc03d8b9850bd1b00, ext:473470670741, loc:(*time.Location)(0x74c3600)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-59b45fb494-9st7d.169ab4ce1dd6f66c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 13 00:03:53 addons-20210812235522-676638 kubelet[1563]: W0813 00:03:53.765807    1563 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 00:03:53 addons-20210812235522-676638 kubelet[1563]: W0813 00:03:53.771923    1563 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
	Aug 13 00:03:53 addons-20210812235522-676638 kubelet[1563]: E0813 00:03:53.773362    1563 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/7dd277d6b392d00b94a317757e345dabb569218ca153823613ed6e3685b06298/docker/7dd277d6b392d00b94a317757e345dabb569218ca153823613ed6e3685b06298\": RecentStats: unable to find data in memory cache]"
	Aug 13 00:03:53 addons-20210812235522-676638 kubelet[1563]: E0813 00:03:53.854077    1563 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/b4fbca6c-a710-4b79-a48c-6c2bd240db3d/etc-hosts with error exit status 1" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-9st7d"
	
	* 
	* ==> storage-provisioner [1d123594ffdbc4d3b4c308f4a41a652a4064d86168457453667f63d256e3ac78] <==
	* I0812 23:56:12.910754       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0812 23:56:13.002546       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0812 23:56:13.002588       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0812 23:56:13.101651       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0812 23:56:13.104047       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210812235522-676638_29bcee1b-afaf-4e6b-99b3-dbfc7495a971!
	I0812 23:56:13.105414       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4c200b11-8732-4957-a4c0-365fa538e892", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210812235522-676638_29bcee1b-afaf-4e6b-99b3-dbfc7495a971 became leader
	I0812 23:56:13.207402       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210812235522-676638_29bcee1b-afaf-4e6b-99b3-dbfc7495a971!

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20210812235522-676638 -n addons-20210812235522-676638
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210812235522-676638 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context addons-20210812235522-676638 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context addons-20210812235522-676638 describe pod : exit status 1 (53.2279ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context addons-20210812235522-676638 describe pod : exit status 1
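The `describe pod` failure above is a post-mortem artifact: the preceding `get po --field-selector=status.phase!=Running` query returned no pods, so `kubectl describe pod` was invoked with an empty name list. A minimal guard sketch (the function name and messages are hypothetical, not from minikube's helpers_test.go; the `kubectl` command is echoed rather than executed):

```shell
#!/bin/sh
# Hypothetical guard: only run `kubectl describe pod` when the non-running
# pod list is non-empty, avoiding "error: resource name may not be empty".
describe_nonrunning() {
  pods="$1"  # would come from: kubectl get po -A --field-selector=status.phase!=Running -o jsonpath='{.items[*].metadata.name}'
  if [ -n "$pods" ]; then
    echo "kubectl describe pod $pods"   # echoed here instead of executed
  else
    echo "no non-running pods to describe"
  fi
}

describe_nonrunning ""       # empty result, as in this run
describe_nonrunning "nginx"  # one hypothetical pod name
```

With an empty list this prints the fallback message instead of exiting non-zero, which is the behavior the post-mortem helper lacked here.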
--- FAIL: TestAddons/parallel/Ingress (312.68s)

TestMultiNode/serial/PingHostFrom2Pods (3.75s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813001157-676638 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813001157-676638 -- exec busybox-84b6686758-j4hzl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813001157-676638 -- exec busybox-84b6686758-j4hzl -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210813001157-676638 -- exec busybox-84b6686758-j4hzl -- sh -c "ping -c 1 192.168.49.1": exit status 1 (211.483462ms)

-- stdout --
	PING 192.168.49.1 (192.168.49.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:538: Failed to ping host (192.168.49.1) from pod (busybox-84b6686758-j4hzl): exit status 1
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813001157-676638 -- exec busybox-84b6686758-pzxgm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813001157-676638 -- exec busybox-84b6686758-pzxgm -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20210813001157-676638 -- exec busybox-84b6686758-pzxgm -- sh -c "ping -c 1 192.168.49.1": exit status 1 (201.458449ms)

-- stdout --
	PING 192.168.49.1 (192.168.49.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:538: Failed to ping host (192.168.49.1) from pod (busybox-84b6686758-pzxgm): exit status 1
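The address being pinged (192.168.49.1) is produced by the preceding `nslookup | awk | cut` step. A minimal local sketch of that parsing pipeline, with a sample busybox-style nslookup reply standing in for live resolver output (the sample text is an assumption, not captured from this run):

```shell
# Reproduces the field extraction from the test's pipeline:
#   nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
# The printf body imitates busybox nslookup output; real output may differ.
printf 'Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.49.1 host.minikube.internal\n' \
  | awk 'NR==5' | cut -d' ' -f3
# prints 192.168.49.1 (line 5 is the answer record; field 3 is the address)
```

The extracted address is what the test then passes to `ping -c 1`; the failure above happens at that later ping step, not in this parsing.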
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect multinode-20210813001157-676638
helpers_test.go:236: (dbg) docker inspect multinode-20210813001157-676638:

-- stdout --
	[
	    {
	        "Id": "57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80",
	        "Created": "2021-08-13T00:11:59.258049375Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 743877,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T00:11:59.712377436Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
	        "ResolvConfPath": "/var/lib/docker/containers/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80/hostname",
	        "HostsPath": "/var/lib/docker/containers/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80/hosts",
	        "LogPath": "/var/lib/docker/containers/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80-json.log",
	        "Name": "/multinode-20210813001157-676638",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-20210813001157-676638:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-20210813001157-676638",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/06458e533da4fd05661e49dfd561f6bd1ff3bcc1dbd3ddc4f2548d626f2a3af1-init/diff:/var/lib/docker/overlay2/dbcccdfd1d8030c8fd84392abd0651a1c83d85eef1664675f19095ba94d0669c/diff:/var/lib/docker/overlay2/48560ccfa5a167568c6c277306b75040147fa803b45938da98f999b9b34770ec/diff:/var/lib/docker/overlay2/82edbb53b45859b009a31b65fc937517517994e9f7f2b61ab6a2cd9b5d793ea6/diff:/var/lib/docker/overlay2/67407816da0f4fc9226789b4471160bc847b978aa567cac46bd77c492c2e0bd8/diff:/var/lib/docker/overlay2/56fdeb530def71ef2955d22a7a9769b93f1dfc06d3e44e40ff20fce371d47e93/diff:/var/lib/docker/overlay2/eb0df517e10831d2d369ffbcc40b44f2ae8a39b1845697429224cb9ee96aef88/diff:/var/lib/docker/overlay2/0f2b796a50d0eef34622b78f6ac5a1b4914163a3b69965848a40245456d0a358/diff:/var/lib/docker/overlay2/fbdca95cd15a30d761b8949a28398e3694f3cd5af4e11f01066b8aa89ab0e223/diff:/var/lib/docker/overlay2/94899cbf3c806327e740cdc8b542a92bcf6e487ba93ab006749e9b13198b697a/diff:/var/lib/docker/overlay2/26a7c874215c711e77443c1fe264e626d5672e0127f6210b657baea90dc79adb/diff:/var/lib/docker/overlay2/16bd4fd277923e4600e9bd3819ae284053a099ab01e82d41f29752792664be0e/diff:/var/lib/docker/overlay2/7309f9c878e5d24824d68bef540877dc63f2d4c0745de5d0bf7f09e2a65c4600/diff:/var/lib/docker/overlay2/69de2b4390e19f2dda71ecf7c7fef7a9c01fabcf86a7c439a2919ae1284c8de6/diff:/var/lib/docker/overlay2/0ff6ec4f8c21672b1a77bd0033b8786942a677b797ffa1c0fbbb8e03a13d84ed/diff:/var/lib/docker/overlay2/d672d17598d05d9daa3eddac9f958d6913ebfccf431eb143f1f3903b89d150a9/diff:/var/lib/docker/overlay2/0f5d711484163b1b3f60dd6126d6daa0154c241a003764ef80e81d73d68b3ed6/diff:/var/lib/docker/overlay2/d3e7cb92a45651117204153d8d9bc8490164e7c8f439d0c6d157aebf680816ae/diff:/var/lib/docker/overlay2/4b81367fe927507da6730098aedd39a4bd58482dacc101a1dd66f191161dce2d/diff:/var/lib/docker/overlay2/5e9324cbc949319d8647c63cf76f1776a9474d1b961f604c7d87daeb7ebb111d/diff:/var/lib/docker/overlay2/010e1940f131233ee479e023b64f3d26d5b8444f44686cc3f0f1508d966a3c37/diff:/var/lib/docker/overlay2/842ba2e088d8e8cdfa07950eb5be4178d7c22d5932419eb6881e2551df6383d1/diff:/var/lib/docker/overlay2/5a3a00a19445c1d8b4de2bac2fee0c796647356d04b424b1a92c252905d279b0/diff:/var/lib/docker/overlay2/fe2f56e2617a01ef886be702089b24e7058e63d8e824252c67d4c1a0879ad160/diff:/var/lib/docker/overlay2/38b35bcc55b3c7019af7c73f5eed6e0fc162e93a9f9dc7005f87a42358887784/diff:/var/lib/docker/overlay2/d9c894d408f003f4a81d231415f823e9936440a1ee3195405f2fa88b29cd4174/diff:/var/lib/docker/overlay2/1f809a5b11bbef9de3b7711ec341e3852caa4fd2c21e59015b082ae96980b66a/diff:/var/lib/docker/overlay2/99b8edcd10c58a9d6dc18c04bc3d78ee5e078fd13677063e50d0f8b7cd484f8e/diff:/var/lib/docker/overlay2/b7e659e3e24c55bbbb4560a609f06959cff67515ccfed5251eb149eb25e46066/diff:/var/lib/docker/overlay2/cd8af3183f19e2c4a376399788541c30ba2531a85eeecf9fe11864d329a507d9/diff:/var/lib/docker/overlay2/84813126d4751fc1c3f21d3f70102678caac8153899dc8a5e0af833842e299a8/diff:/var/lib/docker/overlay2/2a328079a8a98d312436a8d89f7b47dde7400fe0357b71b76ed6bc760f80f741/diff:/var/lib/docker/overlay2/68fb29110f487206a1dee378747a2f3ef1c89149c9968662726587ea918839d7/diff:/var/lib/docker/overlay2/f9baf28d86b9d2aa6fbb47eab690cb3a8a89d77efe26a5f0c73e8f86bce7214f/diff:/var/lib/docker/overlay2/dad436e2a201d537bbbd0e375ec55a078b79dad52ee0a39399e1d1044bef8136/diff:/var/lib/docker/overlay2/4c5f3abd2b3256b532e1522df89aaca80508afb850fe2081fd29e234ecc52a3c/diff:/var/lib/docker/overlay2/abd7c1d6e94e128091e4cd7c4e2b418a6e7f40430fa8e22724424ee318edfaa2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/06458e533da4fd05661e49dfd561f6bd1ff3bcc1dbd3ddc4f2548d626f2a3af1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/06458e533da4fd05661e49dfd561f6bd1ff3bcc1dbd3ddc4f2548d626f2a3af1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/06458e533da4fd05661e49dfd561f6bd1ff3bcc1dbd3ddc4f2548d626f2a3af1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-20210813001157-676638",
	                "Source": "/var/lib/docker/volumes/multinode-20210813001157-676638/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-20210813001157-676638",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-20210813001157-676638",
	                "name.minikube.sigs.k8s.io": "multinode-20210813001157-676638",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93c4d66fb0d0ef70475adef34956f74f2eeda6a626a4cf77102169b017248c4e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33293"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33292"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33289"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33291"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33290"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/93c4d66fb0d0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-20210813001157-676638": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "57ce5ea60f50"
	                    ],
	                    "NetworkID": "8ebb9458ee27d6ec154c38560b48cfe312439ad6b7e924e52da2f8909cb19a3e",
	                    "EndpointID": "dfa87c2bbd58d940928b2e3de3c41a5d2ecf9dc8b1fe8ca99826a31a2294c062",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20210813001157-676638 -n multinode-20210813001157-676638
helpers_test.go:245: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813001157-676638 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813001157-676638 logs -n 25: (1.345051268s)
helpers_test.go:253: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                 Profile                 |   User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| pause   | -p                                                | json-output-20210813000904-676638       | testUser | v1.22.0 | Fri, 13 Aug 2021 00:10:13 UTC | Fri, 13 Aug 2021 00:10:14 UTC |
	|         | json-output-20210813000904-676638                 |                                         |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                         |          |         |                               |                               |
	| unpause | -p                                                | json-output-20210813000904-676638       | testUser | v1.22.0 | Fri, 13 Aug 2021 00:10:14 UTC | Fri, 13 Aug 2021 00:10:15 UTC |
	|         | json-output-20210813000904-676638                 |                                         |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                         |          |         |                               |                               |
	| stop    | -p                                                | json-output-20210813000904-676638       | testUser | v1.22.0 | Fri, 13 Aug 2021 00:10:15 UTC | Fri, 13 Aug 2021 00:10:26 UTC |
	|         | json-output-20210813000904-676638                 |                                         |          |         |                               |                               |
	|         | --output=json --user=testUser                     |                                         |          |         |                               |                               |
	| delete  | -p                                                | json-output-20210813000904-676638       | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:10:26 UTC | Fri, 13 Aug 2021 00:10:32 UTC |
	|         | json-output-20210813000904-676638                 |                                         |          |         |                               |                               |
	| delete  | -p                                                | json-output-error-20210813001032-676638 | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:10:32 UTC | Fri, 13 Aug 2021 00:10:33 UTC |
	|         | json-output-error-20210813001032-676638           |                                         |          |         |                               |                               |
	| start   | -p                                                | docker-network-20210813001033-676638    | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:10:33 UTC | Fri, 13 Aug 2021 00:11:01 UTC |
	|         | docker-network-20210813001033-676638              |                                         |          |         |                               |                               |
	|         | --network=                                        |                                         |          |         |                               |                               |
	| delete  | -p                                                | docker-network-20210813001033-676638    | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:11:01 UTC | Fri, 13 Aug 2021 00:11:04 UTC |
	|         | docker-network-20210813001033-676638              |                                         |          |         |                               |                               |
	| start   | -p                                                | docker-network-20210813001104-676638    | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:11:04 UTC | Fri, 13 Aug 2021 00:11:28 UTC |
	|         | docker-network-20210813001104-676638              |                                         |          |         |                               |                               |
	|         | --network=bridge                                  |                                         |          |         |                               |                               |
	| delete  | -p                                                | docker-network-20210813001104-676638    | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:11:28 UTC | Fri, 13 Aug 2021 00:11:30 UTC |
	|         | docker-network-20210813001104-676638              |                                         |          |         |                               |                               |
	| start   | -p                                                | existing-network-20210813001130-676638  | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:11:31 UTC | Fri, 13 Aug 2021 00:11:54 UTC |
	|         | existing-network-20210813001130-676638            |                                         |          |         |                               |                               |
	|         | --network=existing-network                        |                                         |          |         |                               |                               |
	| delete  | -p                                                | existing-network-20210813001130-676638  | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:11:54 UTC | Fri, 13 Aug 2021 00:11:57 UTC |
	|         | existing-network-20210813001130-676638            |                                         |          |         |                               |                               |
	| start   | -p                                                | multinode-20210813001157-676638         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:11:57 UTC | Fri, 13 Aug 2021 00:13:57 UTC |
	|         | multinode-20210813001157-676638                   |                                         |          |         |                               |                               |
	|         | --wait=true --memory=2200                         |                                         |          |         |                               |                               |
	|         | --nodes=2 -v=8                                    |                                         |          |         |                               |                               |
	|         | --alsologtostderr                                 |                                         |          |         |                               |                               |
	|         | --driver=docker                                   |                                         |          |         |                               |                               |
	|         | --container-runtime=crio                          |                                         |          |         |                               |                               |
	| kubectl | -p multinode-20210813001157-676638 -- apply -f    | multinode-20210813001157-676638         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:13:57 UTC | Fri, 13 Aug 2021 00:13:58 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813001157-676638         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:13:58 UTC | Fri, 13 Aug 2021 00:14:28 UTC |
	|         | multinode-20210813001157-676638                   |                                         |          |         |                               |                               |
	|         | -- rollout status                                 |                                         |          |         |                               |                               |
	|         | deployment/busybox                                |                                         |          |         |                               |                               |
	| kubectl | -p multinode-20210813001157-676638                | multinode-20210813001157-676638         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:14:28 UTC | Fri, 13 Aug 2021 00:14:28 UTC |
	|         | -- get pods -o                                    |                                         |          |         |                               |                               |
	|         | jsonpath='{.items[*].status.podIP}'               |                                         |          |         |                               |                               |
	| kubectl | -p multinode-20210813001157-676638                | multinode-20210813001157-676638         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:14:28 UTC | Fri, 13 Aug 2021 00:14:28 UTC |
	|         | -- get pods -o                                    |                                         |          |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'              |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813001157-676638         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:14:28 UTC | Fri, 13 Aug 2021 00:14:29 UTC |
	|         | multinode-20210813001157-676638                   |                                         |          |         |                               |                               |
	|         | -- exec                                           |                                         |          |         |                               |                               |
	|         | busybox-84b6686758-j4hzl --                       |                                         |          |         |                               |                               |
	|         | nslookup kubernetes.io                            |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813001157-676638         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:14:29 UTC | Fri, 13 Aug 2021 00:14:29 UTC |
	|         | multinode-20210813001157-676638                   |                                         |          |         |                               |                               |
	|         | -- exec                                           |                                         |          |         |                               |                               |
	|         | busybox-84b6686758-pzxgm --                       |                                         |          |         |                               |                               |
	|         | nslookup kubernetes.io                            |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813001157-676638         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:14:29 UTC | Fri, 13 Aug 2021 00:14:29 UTC |
	|         | multinode-20210813001157-676638                   |                                         |          |         |                               |                               |
	|         | -- exec                                           |                                         |          |         |                               |                               |
	|         | busybox-84b6686758-j4hzl --                       |                                         |          |         |                               |                               |
	|         | nslookup kubernetes.default                       |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813001157-676638         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:14:29 UTC | Fri, 13 Aug 2021 00:14:29 UTC |
	|         | multinode-20210813001157-676638                   |                                         |          |         |                               |                               |
	|         | -- exec                                           |                                         |          |         |                               |                               |
	|         | busybox-84b6686758-pzxgm --                       |                                         |          |         |                               |                               |
	|         | nslookup kubernetes.default                       |                                         |          |         |                               |                               |
	| kubectl | -p multinode-20210813001157-676638                | multinode-20210813001157-676638         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:14:30 UTC | Fri, 13 Aug 2021 00:14:30 UTC |
	|         | -- exec busybox-84b6686758-j4hzl                  |                                         |          |         |                               |                               |
	|         | -- nslookup                                       |                                         |          |         |                               |                               |
	|         | kubernetes.default.svc.cluster.local              |                                         |          |         |                               |                               |
	| kubectl | -p multinode-20210813001157-676638                | multinode-20210813001157-676638         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:14:30 UTC | Fri, 13 Aug 2021 00:14:30 UTC |
	|         | -- exec busybox-84b6686758-pzxgm                  |                                         |          |         |                               |                               |
	|         | -- nslookup                                       |                                         |          |         |                               |                               |
	|         | kubernetes.default.svc.cluster.local              |                                         |          |         |                               |                               |
	| kubectl | -p multinode-20210813001157-676638                | multinode-20210813001157-676638         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:14:30 UTC | Fri, 13 Aug 2021 00:14:30 UTC |
	|         | -- get pods -o                                    |                                         |          |         |                               |                               |
	|         | jsonpath='{.items[*].metadata.name}'              |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813001157-676638         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:14:30 UTC | Fri, 13 Aug 2021 00:14:30 UTC |
	|         | multinode-20210813001157-676638                   |                                         |          |         |                               |                               |
	|         | -- exec                                           |                                         |          |         |                               |                               |
	|         | busybox-84b6686758-j4hzl                          |                                         |          |         |                               |                               |
	|         | -- sh -c nslookup                                 |                                         |          |         |                               |                               |
	|         | host.minikube.internal | awk                      |                                         |          |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                           |                                         |          |         |                               |                               |
	| kubectl | -p                                                | multinode-20210813001157-676638         | jenkins  | v1.22.0 | Fri, 13 Aug 2021 00:14:31 UTC | Fri, 13 Aug 2021 00:14:31 UTC |
	|         | multinode-20210813001157-676638                   |                                         |          |         |                               |                               |
	|         | -- exec                                           |                                         |          |         |                               |                               |
	|         | busybox-84b6686758-pzxgm                          |                                         |          |         |                               |                               |
	|         | -- sh -c nslookup                                 |                                         |          |         |                               |                               |
	|         | host.minikube.internal | awk                      |                                         |          |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                           |                                         |          |         |                               |                               |
	|---------|---------------------------------------------------|-----------------------------------------|----------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 00:11:57
	Running on machine: debian-jenkins-agent-12
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 00:11:57.685741  743232 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:11:57.685836  743232 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:11:57.685840  743232 out.go:311] Setting ErrFile to fd 2...
	I0813 00:11:57.685843  743232 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:11:57.685952  743232 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:11:57.686248  743232 out.go:305] Setting JSON to false
	I0813 00:11:57.723987  743232 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":14079,"bootTime":1628799438,"procs":191,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 00:11:57.724153  743232 start.go:121] virtualization: kvm guest
	I0813 00:11:57.727362  743232 out.go:177] * [multinode-20210813001157-676638] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 00:11:57.729248  743232 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:11:57.727568  743232 notify.go:169] Checking for updates...
	I0813 00:11:57.731285  743232 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 00:11:57.733297  743232 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:11:57.735217  743232 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 00:11:57.735546  743232 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 00:11:57.794062  743232 docker.go:132] docker version: linux-19.03.15
	I0813 00:11:57.794237  743232 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 00:11:57.881441  743232 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:59 SystemTime:2021-08-13 00:11:57.832087457 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 00:11:57.881578  743232 docker.go:244] overlay module found
	I0813 00:11:57.884067  743232 out.go:177] * Using the docker driver based on user configuration
	I0813 00:11:57.884107  743232 start.go:278] selected driver: docker
	I0813 00:11:57.884115  743232 start.go:751] validating driver "docker" against <nil>
	I0813 00:11:57.884140  743232 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 00:11:57.884198  743232 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 00:11:57.884217  743232 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 00:11:57.885840  743232 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 00:11:57.886881  743232 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 00:11:57.975240  743232 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:59 SystemTime:2021-08-13 00:11:57.924760919 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 00:11:57.975403  743232 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 00:11:57.975605  743232 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 00:11:57.975634  743232 cni.go:93] Creating CNI manager for ""
	I0813 00:11:57.975641  743232 cni.go:154] 0 nodes found, recommending kindnet
	I0813 00:11:57.975649  743232 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 00:11:57.975658  743232 start_flags.go:277] config:
	{Name:multinode-20210813001157-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813001157-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0813 00:11:57.978268  743232 out.go:177] * Starting control plane node multinode-20210813001157-676638 in cluster multinode-20210813001157-676638
	I0813 00:11:57.978345  743232 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 00:11:57.980262  743232 out.go:177] * Pulling base image ...
	I0813 00:11:57.980349  743232 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:11:57.980395  743232 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 00:11:57.980415  743232 cache.go:56] Caching tarball of preloaded images
	I0813 00:11:57.980461  743232 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0813 00:11:57.980649  743232 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 00:11:57.980664  743232 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 00:11:57.980989  743232 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/config.json ...
	I0813 00:11:57.981021  743232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/config.json: {Name:mk5b5e6ab38470bf28a6ce2b26dc208319bf2290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:11:58.072973  743232 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0813 00:11:58.073010  743232 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0813 00:11:58.073028  743232 cache.go:205] Successfully downloaded all kic artifacts
	I0813 00:11:58.073070  743232 start.go:313] acquiring machines lock for multinode-20210813001157-676638: {Name:mk3de987c1456e3db4b9becac5c16b696f46e805 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:11:58.073270  743232 start.go:317] acquired machines lock for "multinode-20210813001157-676638" in 175.553µs
	I0813 00:11:58.073305  743232 start.go:89] Provisioning new machine with config: &{Name:multinode-20210813001157-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813001157-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 00:11:58.073382  743232 start.go:126] createHost starting for "" (driver="docker")
	I0813 00:11:58.076349  743232 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0813 00:11:58.076596  743232 start.go:160] libmachine.API.Create for "multinode-20210813001157-676638" (driver="docker")
	I0813 00:11:58.076630  743232 client.go:168] LocalClient.Create starting
	I0813 00:11:58.076702  743232 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0813 00:11:58.076765  743232 main.go:130] libmachine: Decoding PEM data...
	I0813 00:11:58.076785  743232 main.go:130] libmachine: Parsing certificate...
	I0813 00:11:58.076881  743232 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0813 00:11:58.076899  743232 main.go:130] libmachine: Decoding PEM data...
	I0813 00:11:58.076909  743232 main.go:130] libmachine: Parsing certificate...
	I0813 00:11:58.077312  743232 cli_runner.go:115] Run: docker network inspect multinode-20210813001157-676638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 00:11:58.118195  743232 cli_runner.go:162] docker network inspect multinode-20210813001157-676638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 00:11:58.118337  743232 network_create.go:255] running [docker network inspect multinode-20210813001157-676638] to gather additional debugging logs...
	I0813 00:11:58.118363  743232 cli_runner.go:115] Run: docker network inspect multinode-20210813001157-676638
	W0813 00:11:58.158635  743232 cli_runner.go:162] docker network inspect multinode-20210813001157-676638 returned with exit code 1
	I0813 00:11:58.158673  743232 network_create.go:258] error running [docker network inspect multinode-20210813001157-676638]: docker network inspect multinode-20210813001157-676638: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20210813001157-676638
	I0813 00:11:58.158690  743232 network_create.go:260] output of [docker network inspect multinode-20210813001157-676638]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20210813001157-676638
	
	** /stderr **
	I0813 00:11:58.158754  743232 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 00:11:58.201110  743232 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0000109f0] misses:0}
	I0813 00:11:58.201188  743232 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 00:11:58.201212  743232 network_create.go:106] attempt to create docker network multinode-20210813001157-676638 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0813 00:11:58.201333  743232 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20210813001157-676638
	I0813 00:11:58.282374  743232 network_create.go:90] docker network multinode-20210813001157-676638 192.168.49.0/24 created
	I0813 00:11:58.282422  743232 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20210813001157-676638" container
	I0813 00:11:58.282481  743232 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 00:11:58.324617  743232 cli_runner.go:115] Run: docker volume create multinode-20210813001157-676638 --label name.minikube.sigs.k8s.io=multinode-20210813001157-676638 --label created_by.minikube.sigs.k8s.io=true
	I0813 00:11:58.367117  743232 oci.go:102] Successfully created a docker volume multinode-20210813001157-676638
	I0813 00:11:58.367221  743232 cli_runner.go:115] Run: docker run --rm --name multinode-20210813001157-676638-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210813001157-676638 --entrypoint /usr/bin/test -v multinode-20210813001157-676638:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0813 00:11:59.123709  743232 oci.go:106] Successfully prepared a docker volume multinode-20210813001157-676638
	W0813 00:11:59.123778  743232 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 00:11:59.123786  743232 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 00:11:59.123839  743232 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 00:11:59.123846  743232 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:11:59.123881  743232 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 00:11:59.123944  743232 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20210813001157-676638:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 00:11:59.212010  743232 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-20210813001157-676638 --name multinode-20210813001157-676638 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210813001157-676638 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-20210813001157-676638 --network multinode-20210813001157-676638 --ip 192.168.49.2 --volume multinode-20210813001157-676638:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0813 00:11:59.723091  743232 cli_runner.go:115] Run: docker container inspect multinode-20210813001157-676638 --format={{.State.Running}}
	I0813 00:11:59.773986  743232 cli_runner.go:115] Run: docker container inspect multinode-20210813001157-676638 --format={{.State.Status}}
	I0813 00:11:59.823540  743232 cli_runner.go:115] Run: docker exec multinode-20210813001157-676638 stat /var/lib/dpkg/alternatives/iptables
	I0813 00:11:59.962909  743232 oci.go:278] the created container "multinode-20210813001157-676638" has a running status.
	I0813 00:11:59.962952  743232 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813001157-676638/id_rsa...
	I0813 00:12:00.121660  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813001157-676638/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0813 00:12:00.121719  743232 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813001157-676638/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 00:12:00.484051  743232 cli_runner.go:115] Run: docker container inspect multinode-20210813001157-676638 --format={{.State.Status}}
	I0813 00:12:00.532854  743232 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 00:12:00.532878  743232 kic_runner.go:115] Args: [docker exec --privileged multinode-20210813001157-676638 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 00:12:02.843222  743232 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20210813001157-676638:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (3.719194451s)
	I0813 00:12:02.843262  743232 kic.go:188] duration metric: took 3.719377 seconds to extract preloaded images to volume
	I0813 00:12:02.843345  743232 cli_runner.go:115] Run: docker container inspect multinode-20210813001157-676638 --format={{.State.Status}}
	I0813 00:12:02.884173  743232 machine.go:88] provisioning docker machine ...
	I0813 00:12:02.884226  743232 ubuntu.go:169] provisioning hostname "multinode-20210813001157-676638"
	I0813 00:12:02.884299  743232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813001157-676638
	I0813 00:12:02.925586  743232 main.go:130] libmachine: Using SSH client type: native
	I0813 00:12:02.925798  743232 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33293 <nil> <nil>}
	I0813 00:12:02.925816  743232 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210813001157-676638 && echo "multinode-20210813001157-676638" | sudo tee /etc/hostname
	I0813 00:12:03.074062  743232 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210813001157-676638
	
	I0813 00:12:03.074155  743232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813001157-676638
	I0813 00:12:03.115319  743232 main.go:130] libmachine: Using SSH client type: native
	I0813 00:12:03.115517  743232 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33293 <nil> <nil>}
	I0813 00:12:03.115548  743232 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210813001157-676638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210813001157-676638/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210813001157-676638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 00:12:03.229429  743232 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 00:12:03.229471  743232 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0813 00:12:03.229581  743232 ubuntu.go:177] setting up certificates
	I0813 00:12:03.229603  743232 provision.go:83] configureAuth start
	I0813 00:12:03.229679  743232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210813001157-676638
	I0813 00:12:03.269679  743232 provision.go:137] copyHostCerts
	I0813 00:12:03.269728  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 00:12:03.269757  743232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem, removing ...
	I0813 00:12:03.269773  743232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 00:12:03.269831  743232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1082 bytes)
	I0813 00:12:03.269919  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 00:12:03.269948  743232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem, removing ...
	I0813 00:12:03.269957  743232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 00:12:03.269982  743232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0813 00:12:03.270039  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 00:12:03.270061  743232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem, removing ...
	I0813 00:12:03.270071  743232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 00:12:03.270096  743232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1675 bytes)
	I0813 00:12:03.270153  743232 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.multinode-20210813001157-676638 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-20210813001157-676638]
	I0813 00:12:03.594513  743232 provision.go:171] copyRemoteCerts
	I0813 00:12:03.594590  743232 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 00:12:03.594631  743232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813001157-676638
	I0813 00:12:03.633798  743232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33293 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813001157-676638/id_rsa Username:docker}
	I0813 00:12:03.717297  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0813 00:12:03.717366  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 00:12:03.735492  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0813 00:12:03.735559  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 00:12:03.753273  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0813 00:12:03.753351  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0813 00:12:03.771028  743232 provision.go:86] duration metric: configureAuth took 541.405548ms
	I0813 00:12:03.771060  743232 ubuntu.go:193] setting minikube options for container-runtime
	I0813 00:12:03.771368  743232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813001157-676638
	I0813 00:12:03.812509  743232 main.go:130] libmachine: Using SSH client type: native
	I0813 00:12:03.812750  743232 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33293 <nil> <nil>}
	I0813 00:12:03.812779  743232 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 00:12:04.188501  743232 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 00:12:04.188544  743232 machine.go:91] provisioned docker machine in 1.304338043s
	I0813 00:12:04.188556  743232 client.go:171] LocalClient.Create took 6.111918775s
	I0813 00:12:04.188578  743232 start.go:168] duration metric: libmachine.API.Create for "multinode-20210813001157-676638" took 6.111981262s
	I0813 00:12:04.188594  743232 start.go:267] post-start starting for "multinode-20210813001157-676638" (driver="docker")
	I0813 00:12:04.188604  743232 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 00:12:04.188689  743232 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 00:12:04.188758  743232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813001157-676638
	I0813 00:12:04.227939  743232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33293 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813001157-676638/id_rsa Username:docker}
	I0813 00:12:04.313375  743232 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 00:12:04.316258  743232 command_runner.go:124] > NAME="Ubuntu"
	I0813 00:12:04.316284  743232 command_runner.go:124] > VERSION="20.04.2 LTS (Focal Fossa)"
	I0813 00:12:04.316289  743232 command_runner.go:124] > ID=ubuntu
	I0813 00:12:04.316293  743232 command_runner.go:124] > ID_LIKE=debian
	I0813 00:12:04.316298  743232 command_runner.go:124] > PRETTY_NAME="Ubuntu 20.04.2 LTS"
	I0813 00:12:04.316310  743232 command_runner.go:124] > VERSION_ID="20.04"
	I0813 00:12:04.316316  743232 command_runner.go:124] > HOME_URL="https://www.ubuntu.com/"
	I0813 00:12:04.316321  743232 command_runner.go:124] > SUPPORT_URL="https://help.ubuntu.com/"
	I0813 00:12:04.316334  743232 command_runner.go:124] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0813 00:12:04.316346  743232 command_runner.go:124] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0813 00:12:04.316356  743232 command_runner.go:124] > VERSION_CODENAME=focal
	I0813 00:12:04.316366  743232 command_runner.go:124] > UBUNTU_CODENAME=focal
	I0813 00:12:04.316451  743232 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 00:12:04.316467  743232 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 00:12:04.316475  743232 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 00:12:04.316483  743232 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 00:12:04.316499  743232 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0813 00:12:04.316547  743232 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0813 00:12:04.316625  743232 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem -> 6766382.pem in /etc/ssl/certs
	I0813 00:12:04.316637  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem -> /etc/ssl/certs/6766382.pem
	I0813 00:12:04.316721  743232 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 00:12:04.323661  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem --> /etc/ssl/certs/6766382.pem (1708 bytes)
	I0813 00:12:04.341543  743232 start.go:270] post-start completed in 152.92928ms
	I0813 00:12:04.341897  743232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210813001157-676638
	I0813 00:12:04.381704  743232 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/config.json ...
	I0813 00:12:04.381946  743232 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 00:12:04.381995  743232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813001157-676638
	I0813 00:12:04.422993  743232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33293 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813001157-676638/id_rsa Username:docker}
	I0813 00:12:04.506156  743232 command_runner.go:124] > 30%! (MISSING)
	I0813 00:12:04.506199  743232 start.go:129] duration metric: createHost completed in 6.432806445s
	I0813 00:12:04.506210  743232 start.go:80] releasing machines lock for "multinode-20210813001157-676638", held for 6.432923544s
	I0813 00:12:04.506300  743232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210813001157-676638
	I0813 00:12:04.546349  743232 ssh_runner.go:149] Run: systemctl --version
	I0813 00:12:04.546410  743232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813001157-676638
	I0813 00:12:04.546445  743232 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 00:12:04.546548  743232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813001157-676638
	I0813 00:12:04.589513  743232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33293 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813001157-676638/id_rsa Username:docker}
	I0813 00:12:04.591312  743232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33293 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813001157-676638/id_rsa Username:docker}
	I0813 00:12:04.736347  743232 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0813 00:12:04.736379  743232 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0813 00:12:04.736385  743232 command_runner.go:124] > <H1>302 Moved</H1>
	I0813 00:12:04.736389  743232 command_runner.go:124] > The document has moved
	I0813 00:12:04.736396  743232 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0813 00:12:04.736400  743232 command_runner.go:124] > </BODY></HTML>
	I0813 00:12:04.737984  743232 command_runner.go:124] > systemd 245 (245.4-4ubuntu3.7)
	I0813 00:12:04.738015  743232 command_runner.go:124] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0813 00:12:04.738159  743232 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 00:12:04.764736  743232 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 00:12:04.774099  743232 docker.go:153] disabling docker service ...
	I0813 00:12:04.774151  743232 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 00:12:04.783914  743232 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 00:12:04.792981  743232 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 00:12:04.859112  743232 command_runner.go:124] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0813 00:12:04.859178  743232 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 00:12:04.868989  743232 command_runner.go:124] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0813 00:12:04.926533  743232 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 00:12:04.936400  743232 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 00:12:04.949142  743232 command_runner.go:124] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0813 00:12:04.949181  743232 command_runner.go:124] > image-endpoint: unix:///var/run/crio/crio.sock
	I0813 00:12:04.950318  743232 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 00:12:04.958470  743232 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 00:12:04.958503  743232 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0813 00:12:04.966689  743232 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 00:12:04.972524  743232 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 00:12:04.973089  743232 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 00:12:04.973143  743232 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 00:12:04.980370  743232 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 00:12:04.986561  743232 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 00:12:05.045942  743232 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 00:12:05.055780  743232 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 00:12:05.055847  743232 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:12:05.059084  743232 command_runner.go:124] >   File: /var/run/crio/crio.sock
	I0813 00:12:05.059109  743232 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0813 00:12:05.059116  743232 command_runner.go:124] > Device: 4eh/78d	Inode: 3505622     Links: 1
	I0813 00:12:05.059124  743232 command_runner.go:124] > Access: (0755/srwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 00:12:05.059129  743232 command_runner.go:124] > Access: 2021-08-13 00:12:04.175594023 +0000
	I0813 00:12:05.059137  743232 command_runner.go:124] > Modify: 2021-08-13 00:12:04.175594023 +0000
	I0813 00:12:05.059146  743232 command_runner.go:124] > Change: 2021-08-13 00:12:04.175594023 +0000
	I0813 00:12:05.059155  743232 command_runner.go:124] >  Birth: -
	I0813 00:12:05.059179  743232 start.go:417] Will wait 60s for crictl version
	I0813 00:12:05.059243  743232 ssh_runner.go:149] Run: sudo crictl version
	I0813 00:12:05.087259  743232 command_runner.go:124] > Version:  0.1.0
	I0813 00:12:05.087284  743232 command_runner.go:124] > RuntimeName:  cri-o
	I0813 00:12:05.087291  743232 command_runner.go:124] > RuntimeVersion:  1.20.3
	I0813 00:12:05.087299  743232 command_runner.go:124] > RuntimeApiVersion:  v1alpha1
	I0813 00:12:05.088920  743232 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 00:12:05.088990  743232 ssh_runner.go:149] Run: crio --version
	I0813 00:12:05.149116  743232 command_runner.go:124] > crio version 1.20.3
	I0813 00:12:05.149143  743232 command_runner.go:124] > Version:       1.20.3
	I0813 00:12:05.149155  743232 command_runner.go:124] > GitCommit:     50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d
	I0813 00:12:05.149162  743232 command_runner.go:124] > GitTreeState:  clean
	I0813 00:12:05.149172  743232 command_runner.go:124] > BuildDate:     2021-06-03T20:25:45Z
	I0813 00:12:05.149179  743232 command_runner.go:124] > GoVersion:     go1.15.2
	I0813 00:12:05.149185  743232 command_runner.go:124] > Compiler:      gc
	I0813 00:12:05.149192  743232 command_runner.go:124] > Platform:      linux/amd64
	I0813 00:12:05.149197  743232 command_runner.go:124] > Linkmode:      dynamic
	I0813 00:12:05.150595  743232 command_runner.go:124] ! time="2021-08-13T00:12:05Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0813 00:12:05.150703  743232 ssh_runner.go:149] Run: crio --version
	I0813 00:12:05.214304  743232 command_runner.go:124] > crio version 1.20.3
	I0813 00:12:05.214328  743232 command_runner.go:124] > Version:       1.20.3
	I0813 00:12:05.214349  743232 command_runner.go:124] > GitCommit:     50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d
	I0813 00:12:05.214355  743232 command_runner.go:124] > GitTreeState:  clean
	I0813 00:12:05.214364  743232 command_runner.go:124] > BuildDate:     2021-06-03T20:25:45Z
	I0813 00:12:05.214370  743232 command_runner.go:124] > GoVersion:     go1.15.2
	I0813 00:12:05.214376  743232 command_runner.go:124] > Compiler:      gc
	I0813 00:12:05.214384  743232 command_runner.go:124] > Platform:      linux/amd64
	I0813 00:12:05.214395  743232 command_runner.go:124] > Linkmode:      dynamic
	I0813 00:12:05.215511  743232 command_runner.go:124] ! time="2021-08-13T00:12:05Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0813 00:12:05.219152  743232 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0813 00:12:05.219237  743232 cli_runner.go:115] Run: docker network inspect multinode-20210813001157-676638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 00:12:05.258181  743232 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0813 00:12:05.261697  743232 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 00:12:05.271475  743232 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:12:05.271553  743232 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 00:12:05.317021  743232 command_runner.go:124] > {
	I0813 00:12:05.317050  743232 command_runner.go:124] >   "images": [
	I0813 00:12:05.317056  743232 command_runner.go:124] >     {
	I0813 00:12:05.317068  743232 command_runner.go:124] >       "id": "6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0813 00:12:05.317076  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.317089  743232 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0813 00:12:05.317095  743232 command_runner.go:124] >       ],
	I0813 00:12:05.317100  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.317109  743232 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1",
	I0813 00:12:05.317161  743232 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0813 00:12:05.317170  743232 command_runner.go:124] >       ],
	I0813 00:12:05.317178  743232 command_runner.go:124] >       "size": "119984626",
	I0813 00:12:05.317186  743232 command_runner.go:124] >       "uid": null,
	I0813 00:12:05.317193  743232 command_runner.go:124] >       "username": "",
	I0813 00:12:05.317199  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.317207  743232 command_runner.go:124] >     },
	I0813 00:12:05.317216  743232 command_runner.go:124] >     {
	I0813 00:12:05.317264  743232 command_runner.go:124] >       "id": "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0813 00:12:05.317275  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.317288  743232 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0813 00:12:05.317297  743232 command_runner.go:124] >       ],
	I0813 00:12:05.317303  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.317319  743232 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f",
	I0813 00:12:05.317338  743232 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0813 00:12:05.317346  743232 command_runner.go:124] >       ],
	I0813 00:12:05.317352  743232 command_runner.go:124] >       "size": "228528983",
	I0813 00:12:05.317356  743232 command_runner.go:124] >       "uid": null,
	I0813 00:12:05.317365  743232 command_runner.go:124] >       "username": "nonroot",
	I0813 00:12:05.317378  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.317386  743232 command_runner.go:124] >     },
	I0813 00:12:05.317394  743232 command_runner.go:124] >     {
	I0813 00:12:05.317408  743232 command_runner.go:124] >       "id": "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0813 00:12:05.317417  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.317429  743232 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0813 00:12:05.317437  743232 command_runner.go:124] >       ],
	I0813 00:12:05.317441  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.317452  743232 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf",
	I0813 00:12:05.317469  743232 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:d78f995c07124874c2a2e9b404cffa6bc6233668d63d6c6210574971f3d5914b"
	I0813 00:12:05.317477  743232 command_runner.go:124] >       ],
	I0813 00:12:05.317488  743232 command_runner.go:124] >       "size": "36950651",
	I0813 00:12:05.317499  743232 command_runner.go:124] >       "uid": null,
	I0813 00:12:05.317508  743232 command_runner.go:124] >       "username": "",
	I0813 00:12:05.317517  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.317520  743232 command_runner.go:124] >     },
	I0813 00:12:05.317527  743232 command_runner.go:124] >     {
	I0813 00:12:05.317540  743232 command_runner.go:124] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0813 00:12:05.317549  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.317560  743232 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0813 00:12:05.317569  743232 command_runner.go:124] >       ],
	I0813 00:12:05.317579  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.317595  743232 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0813 00:12:05.317606  743232 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0813 00:12:05.317610  743232 command_runner.go:124] >       ],
	I0813 00:12:05.317617  743232 command_runner.go:124] >       "size": "31470524",
	I0813 00:12:05.317631  743232 command_runner.go:124] >       "uid": null,
	I0813 00:12:05.317641  743232 command_runner.go:124] >       "username": "",
	I0813 00:12:05.317650  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.317659  743232 command_runner.go:124] >     },
	I0813 00:12:05.317667  743232 command_runner.go:124] >     {
	I0813 00:12:05.317682  743232 command_runner.go:124] >       "id": "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899",
	I0813 00:12:05.317690  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.317696  743232 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns:v1.8.0"
	I0813 00:12:05.317704  743232 command_runner.go:124] >       ],
	I0813 00:12:05.317713  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.317729  743232 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61",
	I0813 00:12:05.317744  743232 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e"
	I0813 00:12:05.317752  743232 command_runner.go:124] >       ],
	I0813 00:12:05.317762  743232 command_runner.go:124] >       "size": "42585056",
	I0813 00:12:05.317769  743232 command_runner.go:124] >       "uid": null,
	I0813 00:12:05.317773  743232 command_runner.go:124] >       "username": "",
	I0813 00:12:05.317777  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.317782  743232 command_runner.go:124] >     },
	I0813 00:12:05.317790  743232 command_runner.go:124] >     {
	I0813 00:12:05.317805  743232 command_runner.go:124] >       "id": "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0813 00:12:05.317814  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.317825  743232 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0813 00:12:05.317833  743232 command_runner.go:124] >       ],
	I0813 00:12:05.317842  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.317855  743232 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2",
	I0813 00:12:05.317868  743232 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a"
	I0813 00:12:05.317877  743232 command_runner.go:124] >       ],
	I0813 00:12:05.317887  743232 command_runner.go:124] >       "size": "254662613",
	I0813 00:12:05.317896  743232 command_runner.go:124] >       "uid": null,
	I0813 00:12:05.317908  743232 command_runner.go:124] >       "username": "",
	I0813 00:12:05.317917  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.317925  743232 command_runner.go:124] >     },
	I0813 00:12:05.317933  743232 command_runner.go:124] >     {
	I0813 00:12:05.317940  743232 command_runner.go:124] >       "id": "3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80",
	I0813 00:12:05.317947  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.317955  743232 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.21.3"
	I0813 00:12:05.317964  743232 command_runner.go:124] >       ],
	I0813 00:12:05.317973  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.317988  743232 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2",
	I0813 00:12:05.318004  743232 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:910cfdf034262c7b68ecb17c0885f39bdaaad07d87c9a5b6320819d8500b7ee5"
	I0813 00:12:05.318012  743232 command_runner.go:124] >       ],
	I0813 00:12:05.318020  743232 command_runner.go:124] >       "size": "126878961",
	I0813 00:12:05.318023  743232 command_runner.go:124] >       "uid": {
	I0813 00:12:05.318032  743232 command_runner.go:124] >         "value": "0"
	I0813 00:12:05.318042  743232 command_runner.go:124] >       },
	I0813 00:12:05.318053  743232 command_runner.go:124] >       "username": "",
	I0813 00:12:05.318062  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.318068  743232 command_runner.go:124] >     },
	I0813 00:12:05.318076  743232 command_runner.go:124] >     {
	I0813 00:12:05.318090  743232 command_runner.go:124] >       "id": "bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9",
	I0813 00:12:05.318098  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.318107  743232 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.21.3"
	I0813 00:12:05.318113  743232 command_runner.go:124] >       ],
	I0813 00:12:05.318120  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.318136  743232 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b",
	I0813 00:12:05.318179  743232 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:7fb1f6614597c255b475ed8abf553e0d4e8ea211b06a90bed53eaddcfb9c354f"
	I0813 00:12:05.318184  743232 command_runner.go:124] >       ],
	I0813 00:12:05.318194  743232 command_runner.go:124] >       "size": "121087578",
	I0813 00:12:05.318203  743232 command_runner.go:124] >       "uid": {
	I0813 00:12:05.318212  743232 command_runner.go:124] >         "value": "0"
	I0813 00:12:05.318220  743232 command_runner.go:124] >       },
	I0813 00:12:05.318235  743232 command_runner.go:124] >       "username": "",
	I0813 00:12:05.318245  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.318253  743232 command_runner.go:124] >     },
	I0813 00:12:05.318258  743232 command_runner.go:124] >     {
	I0813 00:12:05.318271  743232 command_runner.go:124] >       "id": "adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92",
	I0813 00:12:05.318277  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.318284  743232 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.21.3"
	I0813 00:12:05.318294  743232 command_runner.go:124] >       ],
	I0813 00:12:05.318304  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.318319  743232 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b",
	I0813 00:12:05.318333  743232 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:c7778d7b97b2a822c3fe3e921d104ac42afbd38268de8df03557465780886627"
	I0813 00:12:05.318341  743232 command_runner.go:124] >       ],
	I0813 00:12:05.318348  743232 command_runner.go:124] >       "size": "105129702",
	I0813 00:12:05.318356  743232 command_runner.go:124] >       "uid": null,
	I0813 00:12:05.318360  743232 command_runner.go:124] >       "username": "",
	I0813 00:12:05.318368  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.318377  743232 command_runner.go:124] >     },
	I0813 00:12:05.318385  743232 command_runner.go:124] >     {
	I0813 00:12:05.318398  743232 command_runner.go:124] >       "id": "6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a",
	I0813 00:12:05.318408  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.318420  743232 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.21.3"
	I0813 00:12:05.318429  743232 command_runner.go:124] >       ],
	I0813 00:12:05.318436  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.318447  743232 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4",
	I0813 00:12:05.318461  743232 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:b61779ea1bd936c137b25b3a7baa5551fbbd84fed8568d15c7c85ab1139521c0"
	I0813 00:12:05.318470  743232 command_runner.go:124] >       ],
	I0813 00:12:05.318480  743232 command_runner.go:124] >       "size": "51893338",
	I0813 00:12:05.318488  743232 command_runner.go:124] >       "uid": {
	I0813 00:12:05.318498  743232 command_runner.go:124] >         "value": "0"
	I0813 00:12:05.318506  743232 command_runner.go:124] >       },
	I0813 00:12:05.318516  743232 command_runner.go:124] >       "username": "",
	I0813 00:12:05.318521  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.318527  743232 command_runner.go:124] >     },
	I0813 00:12:05.318530  743232 command_runner.go:124] >     {
	I0813 00:12:05.318542  743232 command_runner.go:124] >       "id": "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253",
	I0813 00:12:05.318552  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.318562  743232 command_runner.go:124] >         "k8s.gcr.io/pause:3.4.1"
	I0813 00:12:05.318571  743232 command_runner.go:124] >       ],
	I0813 00:12:05.318580  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.318596  743232 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810",
	I0813 00:12:05.318609  743232 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:914e745e524aa94315a25b49a7fafc0aa395e332126930593225d7a513f5a6b2"
	I0813 00:12:05.318615  743232 command_runner.go:124] >       ],
	I0813 00:12:05.318622  743232 command_runner.go:124] >       "size": "689817",
	I0813 00:12:05.318631  743232 command_runner.go:124] >       "uid": null,
	I0813 00:12:05.318641  743232 command_runner.go:124] >       "username": "",
	I0813 00:12:05.318650  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.318661  743232 command_runner.go:124] >     }
	I0813 00:12:05.318669  743232 command_runner.go:124] >   ]
	I0813 00:12:05.318677  743232 command_runner.go:124] > }
	I0813 00:12:05.318907  743232 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 00:12:05.318923  743232 crio.go:333] Images already preloaded, skipping extraction
	I0813 00:12:05.318971  743232 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 00:12:05.340980  743232 command_runner.go:124] > {
	I0813 00:12:05.341007  743232 command_runner.go:124] >   "images": [
	I0813 00:12:05.341012  743232 command_runner.go:124] >     {
	I0813 00:12:05.341022  743232 command_runner.go:124] >       "id": "6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb",
	I0813 00:12:05.341027  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.341037  743232 command_runner.go:124] >         "docker.io/kindest/kindnetd:v20210326-1e038dc5"
	I0813 00:12:05.341041  743232 command_runner.go:124] >       ],
	I0813 00:12:05.341046  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.341057  743232 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1",
	I0813 00:12:05.341076  743232 command_runner.go:124] >         "docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"
	I0813 00:12:05.341082  743232 command_runner.go:124] >       ],
	I0813 00:12:05.341092  743232 command_runner.go:124] >       "size": "119984626",
	I0813 00:12:05.341101  743232 command_runner.go:124] >       "uid": null,
	I0813 00:12:05.341111  743232 command_runner.go:124] >       "username": "",
	I0813 00:12:05.341120  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.341125  743232 command_runner.go:124] >     },
	I0813 00:12:05.341129  743232 command_runner.go:124] >     {
	I0813 00:12:05.341136  743232 command_runner.go:124] >       "id": "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db",
	I0813 00:12:05.341144  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.341150  743232 command_runner.go:124] >         "docker.io/kubernetesui/dashboard:v2.1.0"
	I0813 00:12:05.341156  743232 command_runner.go:124] >       ],
	I0813 00:12:05.341160  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.341171  743232 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f",
	I0813 00:12:05.341182  743232 command_runner.go:124] >         "docker.io/kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6"
	I0813 00:12:05.341188  743232 command_runner.go:124] >       ],
	I0813 00:12:05.341196  743232 command_runner.go:124] >       "size": "228528983",
	I0813 00:12:05.341202  743232 command_runner.go:124] >       "uid": null,
	I0813 00:12:05.341207  743232 command_runner.go:124] >       "username": "nonroot",
	I0813 00:12:05.341216  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.341253  743232 command_runner.go:124] >     },
	I0813 00:12:05.341260  743232 command_runner.go:124] >     {
	I0813 00:12:05.341274  743232 command_runner.go:124] >       "id": "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4",
	I0813 00:12:05.341284  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.341295  743232 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper:v1.0.4"
	I0813 00:12:05.341303  743232 command_runner.go:124] >       ],
	I0813 00:12:05.341312  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.341322  743232 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf",
	I0813 00:12:05.341333  743232 command_runner.go:124] >         "docker.io/kubernetesui/metrics-scraper@sha256:d78f995c07124874c2a2e9b404cffa6bc6233668d63d6c6210574971f3d5914b"
	I0813 00:12:05.341340  743232 command_runner.go:124] >       ],
	I0813 00:12:05.341349  743232 command_runner.go:124] >       "size": "36950651",
	I0813 00:12:05.341353  743232 command_runner.go:124] >       "uid": null,
	I0813 00:12:05.341357  743232 command_runner.go:124] >       "username": "",
	I0813 00:12:05.341364  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.341367  743232 command_runner.go:124] >     },
	I0813 00:12:05.341373  743232 command_runner.go:124] >     {
	I0813 00:12:05.341381  743232 command_runner.go:124] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0813 00:12:05.341388  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.341394  743232 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0813 00:12:05.341399  743232 command_runner.go:124] >       ],
	I0813 00:12:05.341404  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.341414  743232 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0813 00:12:05.341425  743232 command_runner.go:124] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0813 00:12:05.341430  743232 command_runner.go:124] >       ],
	I0813 00:12:05.341434  743232 command_runner.go:124] >       "size": "31470524",
	I0813 00:12:05.341446  743232 command_runner.go:124] >       "uid": null,
	I0813 00:12:05.341453  743232 command_runner.go:124] >       "username": "",
	I0813 00:12:05.341457  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.341463  743232 command_runner.go:124] >     },
	I0813 00:12:05.341466  743232 command_runner.go:124] >     {
	I0813 00:12:05.341475  743232 command_runner.go:124] >       "id": "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899",
	I0813 00:12:05.341481  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.341486  743232 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns:v1.8.0"
	I0813 00:12:05.341492  743232 command_runner.go:124] >       ],
	I0813 00:12:05.341496  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.341506  743232 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61",
	I0813 00:12:05.341518  743232 command_runner.go:124] >         "k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e"
	I0813 00:12:05.341525  743232 command_runner.go:124] >       ],
	I0813 00:12:05.341529  743232 command_runner.go:124] >       "size": "42585056",
	I0813 00:12:05.341532  743232 command_runner.go:124] >       "uid": null,
	I0813 00:12:05.341536  743232 command_runner.go:124] >       "username": "",
	I0813 00:12:05.341543  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.341547  743232 command_runner.go:124] >     },
	I0813 00:12:05.341550  743232 command_runner.go:124] >     {
	I0813 00:12:05.341556  743232 command_runner.go:124] >       "id": "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
	I0813 00:12:05.341563  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.341568  743232 command_runner.go:124] >         "k8s.gcr.io/etcd:3.4.13-0"
	I0813 00:12:05.341573  743232 command_runner.go:124] >       ],
	I0813 00:12:05.341577  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.341589  743232 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2",
	I0813 00:12:05.341599  743232 command_runner.go:124] >         "k8s.gcr.io/etcd@sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a"
	I0813 00:12:05.341604  743232 command_runner.go:124] >       ],
	I0813 00:12:05.341609  743232 command_runner.go:124] >       "size": "254662613",
	I0813 00:12:05.341615  743232 command_runner.go:124] >       "uid": null,
	I0813 00:12:05.341621  743232 command_runner.go:124] >       "username": "",
	I0813 00:12:05.341627  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.341631  743232 command_runner.go:124] >     },
	I0813 00:12:05.341634  743232 command_runner.go:124] >     {
	I0813 00:12:05.341641  743232 command_runner.go:124] >       "id": "3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80",
	I0813 00:12:05.341647  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.341655  743232 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver:v1.21.3"
	I0813 00:12:05.341661  743232 command_runner.go:124] >       ],
	I0813 00:12:05.341665  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.341675  743232 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:7950be952e1bf5fea24bd8deb79dd871b92d7f2ae02751467670ed9e54fa27c2",
	I0813 00:12:05.341686  743232 command_runner.go:124] >         "k8s.gcr.io/kube-apiserver@sha256:910cfdf034262c7b68ecb17c0885f39bdaaad07d87c9a5b6320819d8500b7ee5"
	I0813 00:12:05.341690  743232 command_runner.go:124] >       ],
	I0813 00:12:05.341700  743232 command_runner.go:124] >       "size": "126878961",
	I0813 00:12:05.341707  743232 command_runner.go:124] >       "uid": {
	I0813 00:12:05.341711  743232 command_runner.go:124] >         "value": "0"
	I0813 00:12:05.341714  743232 command_runner.go:124] >       },
	I0813 00:12:05.341718  743232 command_runner.go:124] >       "username": "",
	I0813 00:12:05.341722  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.341725  743232 command_runner.go:124] >     },
	I0813 00:12:05.341729  743232 command_runner.go:124] >     {
	I0813 00:12:05.341735  743232 command_runner.go:124] >       "id": "bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9",
	I0813 00:12:05.341743  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.341752  743232 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager:v1.21.3"
	I0813 00:12:05.341758  743232 command_runner.go:124] >       ],
	I0813 00:12:05.341762  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.341772  743232 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:020336b75c4893f1849758800d6f98bb2718faf3e5c812f91ce9fc4dfb69543b",
	I0813 00:12:05.341782  743232 command_runner.go:124] >         "k8s.gcr.io/kube-controller-manager@sha256:7fb1f6614597c255b475ed8abf553e0d4e8ea211b06a90bed53eaddcfb9c354f"
	I0813 00:12:05.341788  743232 command_runner.go:124] >       ],
	I0813 00:12:05.341822  743232 command_runner.go:124] >       "size": "121087578",
	I0813 00:12:05.341832  743232 command_runner.go:124] >       "uid": {
	I0813 00:12:05.341836  743232 command_runner.go:124] >         "value": "0"
	I0813 00:12:05.341839  743232 command_runner.go:124] >       },
	I0813 00:12:05.341854  743232 command_runner.go:124] >       "username": "",
	I0813 00:12:05.341865  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.341870  743232 command_runner.go:124] >     },
	I0813 00:12:05.341875  743232 command_runner.go:124] >     {
	I0813 00:12:05.341887  743232 command_runner.go:124] >       "id": "adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92",
	I0813 00:12:05.341895  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.341902  743232 command_runner.go:124] >         "k8s.gcr.io/kube-proxy:v1.21.3"
	I0813 00:12:05.341913  743232 command_runner.go:124] >       ],
	I0813 00:12:05.341918  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.341933  743232 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:af5c9bacb913b5751d2d94e11dfd4e183e97b1a4afce282be95ce177f4a0100b",
	I0813 00:12:05.341946  743232 command_runner.go:124] >         "k8s.gcr.io/kube-proxy@sha256:c7778d7b97b2a822c3fe3e921d104ac42afbd38268de8df03557465780886627"
	I0813 00:12:05.341954  743232 command_runner.go:124] >       ],
	I0813 00:12:05.341961  743232 command_runner.go:124] >       "size": "105129702",
	I0813 00:12:05.341971  743232 command_runner.go:124] >       "uid": null,
	I0813 00:12:05.341976  743232 command_runner.go:124] >       "username": "",
	I0813 00:12:05.341985  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.341994  743232 command_runner.go:124] >     },
	I0813 00:12:05.342003  743232 command_runner.go:124] >     {
	I0813 00:12:05.342016  743232 command_runner.go:124] >       "id": "6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a",
	I0813 00:12:05.342025  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.342036  743232 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler:v1.21.3"
	I0813 00:12:05.342044  743232 command_runner.go:124] >       ],
	I0813 00:12:05.342050  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.342063  743232 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:65aabc4434c565672db176e0f0e84f0ff5751dc446097f5c0ec3bf5d22bdb6c4",
	I0813 00:12:05.342091  743232 command_runner.go:124] >         "k8s.gcr.io/kube-scheduler@sha256:b61779ea1bd936c137b25b3a7baa5551fbbd84fed8568d15c7c85ab1139521c0"
	I0813 00:12:05.342102  743232 command_runner.go:124] >       ],
	I0813 00:12:05.342112  743232 command_runner.go:124] >       "size": "51893338",
	I0813 00:12:05.342121  743232 command_runner.go:124] >       "uid": {
	I0813 00:12:05.342131  743232 command_runner.go:124] >         "value": "0"
	I0813 00:12:05.342139  743232 command_runner.go:124] >       },
	I0813 00:12:05.342149  743232 command_runner.go:124] >       "username": "",
	I0813 00:12:05.342155  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.342164  743232 command_runner.go:124] >     },
	I0813 00:12:05.342173  743232 command_runner.go:124] >     {
	I0813 00:12:05.342187  743232 command_runner.go:124] >       "id": "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253",
	I0813 00:12:05.342203  743232 command_runner.go:124] >       "repoTags": [
	I0813 00:12:05.342211  743232 command_runner.go:124] >         "k8s.gcr.io/pause:3.4.1"
	I0813 00:12:05.342218  743232 command_runner.go:124] >       ],
	I0813 00:12:05.342228  743232 command_runner.go:124] >       "repoDigests": [
	I0813 00:12:05.342243  743232 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810",
	I0813 00:12:05.342257  743232 command_runner.go:124] >         "k8s.gcr.io/pause@sha256:914e745e524aa94315a25b49a7fafc0aa395e332126930593225d7a513f5a6b2"
	I0813 00:12:05.342264  743232 command_runner.go:124] >       ],
	I0813 00:12:05.342271  743232 command_runner.go:124] >       "size": "689817",
	I0813 00:12:05.342279  743232 command_runner.go:124] >       "uid": null,
	I0813 00:12:05.342284  743232 command_runner.go:124] >       "username": "",
	I0813 00:12:05.342293  743232 command_runner.go:124] >       "spec": null
	I0813 00:12:05.342305  743232 command_runner.go:124] >     }
	I0813 00:12:05.342312  743232 command_runner.go:124] >   ]
	I0813 00:12:05.342317  743232 command_runner.go:124] > }
	I0813 00:12:05.342982  743232 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 00:12:05.343003  743232 cache_images.go:74] Images are preloaded, skipping loading
	I0813 00:12:05.343075  743232 ssh_runner.go:149] Run: crio config
	I0813 00:12:05.409369  743232 command_runner.go:124] > # The CRI-O configuration file specifies all of the available configuration
	I0813 00:12:05.409402  743232 command_runner.go:124] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0813 00:12:05.409413  743232 command_runner.go:124] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0813 00:12:05.409418  743232 command_runner.go:124] > #
	I0813 00:12:05.409430  743232 command_runner.go:124] > # Please refer to crio.conf(5) for details of all configuration options.
	I0813 00:12:05.409439  743232 command_runner.go:124] > # CRI-O supports partial configuration reload during runtime, which can be
	I0813 00:12:05.409450  743232 command_runner.go:124] > # done by sending SIGHUP to the running process. Currently supported options
	I0813 00:12:05.409488  743232 command_runner.go:124] > # are explicitly mentioned with: 'This option supports live configuration
	I0813 00:12:05.409500  743232 command_runner.go:124] > # reload'.
	I0813 00:12:05.409511  743232 command_runner.go:124] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0813 00:12:05.409524  743232 command_runner.go:124] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0813 00:12:05.409541  743232 command_runner.go:124] > # you want to change the system's defaults. If you want to modify storage just
	I0813 00:12:05.409555  743232 command_runner.go:124] > # for CRI-O, you can change the storage configuration options here.
	I0813 00:12:05.409560  743232 command_runner.go:124] > [crio]
	I0813 00:12:05.409568  743232 command_runner.go:124] > # Path to the "root directory". CRI-O stores all of its data, including
	I0813 00:12:05.409580  743232 command_runner.go:124] > # containers images, in this directory.
	I0813 00:12:05.409591  743232 command_runner.go:124] > #root = "/var/lib/containers/storage"
	I0813 00:12:05.409614  743232 command_runner.go:124] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0813 00:12:05.409625  743232 command_runner.go:124] > #runroot = "/run/containers/storage"
	I0813 00:12:05.409641  743232 command_runner.go:124] > # Storage driver used to manage the storage of images and containers. Please
	I0813 00:12:05.409672  743232 command_runner.go:124] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0813 00:12:05.409682  743232 command_runner.go:124] > #storage_driver = "overlay"
	I0813 00:12:05.409692  743232 command_runner.go:124] > # List to pass options to the storage driver. Please refer to
	I0813 00:12:05.409707  743232 command_runner.go:124] > # containers-storage.conf(5) to see all available storage options.
	I0813 00:12:05.409717  743232 command_runner.go:124] > #storage_option = [
	I0813 00:12:05.409724  743232 command_runner.go:124] > #	"overlay.mountopt=nodev",
	I0813 00:12:05.409732  743232 command_runner.go:124] > #]
	I0813 00:12:05.409743  743232 command_runner.go:124] > # The default log directory where all logs will go unless directly specified by
	I0813 00:12:05.409755  743232 command_runner.go:124] > # the kubelet. The log directory specified must be an absolute directory.
	I0813 00:12:05.409762  743232 command_runner.go:124] > log_dir = "/var/log/crio/pods"
	I0813 00:12:05.409770  743232 command_runner.go:124] > # Location for CRI-O to lay down the temporary version file.
	I0813 00:12:05.409790  743232 command_runner.go:124] > # It is used to check if crio wipe should wipe containers, which should
	I0813 00:12:05.409801  743232 command_runner.go:124] > # always happen on a node reboot
	I0813 00:12:05.409811  743232 command_runner.go:124] > version_file = "/var/run/crio/version"
	I0813 00:12:05.409823  743232 command_runner.go:124] > # Location for CRI-O to lay down the persistent version file.
	I0813 00:12:05.409836  743232 command_runner.go:124] > # It is used to check if crio wipe should wipe images, which should
	I0813 00:12:05.409850  743232 command_runner.go:124] > # only happen when CRI-O has been upgraded
	I0813 00:12:05.409863  743232 command_runner.go:124] > version_file_persist = "/var/lib/crio/version"
	I0813 00:12:05.409875  743232 command_runner.go:124] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0813 00:12:05.409884  743232 command_runner.go:124] > [crio.api]
	I0813 00:12:05.409896  743232 command_runner.go:124] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0813 00:12:05.409906  743232 command_runner.go:124] > listen = "/var/run/crio/crio.sock"
	I0813 00:12:05.409919  743232 command_runner.go:124] > # IP address on which the stream server will listen.
	I0813 00:12:05.409930  743232 command_runner.go:124] > stream_address = "127.0.0.1"
	I0813 00:12:05.409943  743232 command_runner.go:124] > # The port on which the stream server will listen. If the port is set to "0", then
	I0813 00:12:05.409953  743232 command_runner.go:124] > # CRI-O will allocate a random free port number.
	I0813 00:12:05.409960  743232 command_runner.go:124] > stream_port = "0"
	I0813 00:12:05.409971  743232 command_runner.go:124] > # Enable encrypted TLS transport of the stream server.
	I0813 00:12:05.409981  743232 command_runner.go:124] > stream_enable_tls = false
	I0813 00:12:05.409993  743232 command_runner.go:124] > # Length of time until open streams terminate due to lack of activity
	I0813 00:12:05.410002  743232 command_runner.go:124] > stream_idle_timeout = ""
	I0813 00:12:05.410015  743232 command_runner.go:124] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0813 00:12:05.410028  743232 command_runner.go:124] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0813 00:12:05.410036  743232 command_runner.go:124] > # minutes.
	I0813 00:12:05.410042  743232 command_runner.go:124] > stream_tls_cert = ""
	I0813 00:12:05.410059  743232 command_runner.go:124] > # Path to the key file used to serve the encrypted stream. This file can
	I0813 00:12:05.410072  743232 command_runner.go:124] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0813 00:12:05.410081  743232 command_runner.go:124] > stream_tls_key = ""
	I0813 00:12:05.410093  743232 command_runner.go:124] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0813 00:12:05.410107  743232 command_runner.go:124] > # communication with the encrypted stream. This file can change and CRI-O will
	I0813 00:12:05.410119  743232 command_runner.go:124] > # automatically pick up the changes within 5 minutes.
	I0813 00:12:05.410129  743232 command_runner.go:124] > stream_tls_ca = ""
	I0813 00:12:05.410145  743232 command_runner.go:124] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 00:12:05.410198  743232 command_runner.go:124] > grpc_max_send_msg_size = 16777216
	I0813 00:12:05.410218  743232 command_runner.go:124] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 00:12:05.410225  743232 command_runner.go:124] > grpc_max_recv_msg_size = 16777216
	I0813 00:12:05.410234  743232 command_runner.go:124] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0813 00:12:05.410242  743232 command_runner.go:124] > # and options for how to set up and manage the OCI runtime.
	I0813 00:12:05.410248  743232 command_runner.go:124] > [crio.runtime]
	I0813 00:12:05.410256  743232 command_runner.go:124] > # A list of ulimits to be set in containers by default, specified as
	I0813 00:12:05.410265  743232 command_runner.go:124] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0813 00:12:05.410281  743232 command_runner.go:124] > # "nofile=1024:2048"
	I0813 00:12:05.410290  743232 command_runner.go:124] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0813 00:12:05.410300  743232 command_runner.go:124] > #default_ulimits = [
	I0813 00:12:05.410305  743232 command_runner.go:124] > #]
	I0813 00:12:05.410318  743232 command_runner.go:124] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0813 00:12:05.410324  743232 command_runner.go:124] > no_pivot = false
	I0813 00:12:05.410330  743232 command_runner.go:124] > # decryption_keys_path is the path where the keys required for
	I0813 00:12:05.410349  743232 command_runner.go:124] > # image decryption are stored. This option supports live configuration reload.
	I0813 00:12:05.410356  743232 command_runner.go:124] > decryption_keys_path = "/etc/crio/keys/"
	I0813 00:12:05.410373  743232 command_runner.go:124] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0813 00:12:05.410378  743232 command_runner.go:124] > # Will be searched for using $PATH if empty.
	I0813 00:12:05.410381  743232 command_runner.go:124] > conmon = ""
	I0813 00:12:05.410385  743232 command_runner.go:124] > # Cgroup setting for conmon
	I0813 00:12:05.410389  743232 command_runner.go:124] > conmon_cgroup = "system.slice"
	I0813 00:12:05.410396  743232 command_runner.go:124] > # Environment variable list for the conmon process, used for passing necessary
	I0813 00:12:05.410401  743232 command_runner.go:124] > # environment variables to conmon or the runtime.
	I0813 00:12:05.410405  743232 command_runner.go:124] > conmon_env = [
	I0813 00:12:05.410411  743232 command_runner.go:124] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0813 00:12:05.410417  743232 command_runner.go:124] > ]
	I0813 00:12:05.410422  743232 command_runner.go:124] > # Additional environment variables to set for all the
	I0813 00:12:05.410427  743232 command_runner.go:124] > # containers. These are overridden if set in the
	I0813 00:12:05.410433  743232 command_runner.go:124] > # container image spec or in the container runtime configuration.
	I0813 00:12:05.410440  743232 command_runner.go:124] > default_env = [
	I0813 00:12:05.410446  743232 command_runner.go:124] > ]
	I0813 00:12:05.410454  743232 command_runner.go:124] > # If true, SELinux will be used for pod separation on the host.
	I0813 00:12:05.410458  743232 command_runner.go:124] > selinux = false
	I0813 00:12:05.410465  743232 command_runner.go:124] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0813 00:12:05.410475  743232 command_runner.go:124] > # for the runtime. If not specified, then the internal default seccomp profile
	I0813 00:12:05.410482  743232 command_runner.go:124] > # will be used. This option supports live configuration reload.
	I0813 00:12:05.410489  743232 command_runner.go:124] > seccomp_profile = ""
	I0813 00:12:05.410510  743232 command_runner.go:124] > # Changes the meaning of an empty seccomp profile. By default
	I0813 00:12:05.410521  743232 command_runner.go:124] > # (and according to CRI spec), an empty profile means unconfined.
	I0813 00:12:05.410527  743232 command_runner.go:124] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0813 00:12:05.410534  743232 command_runner.go:124] > # which might increase security.
	I0813 00:12:05.410539  743232 command_runner.go:124] > seccomp_use_default_when_empty = false
	I0813 00:12:05.410546  743232 command_runner.go:124] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0813 00:12:05.410555  743232 command_runner.go:124] > # profile name is "crio-default". This profile only takes effect if the user
	I0813 00:12:05.410565  743232 command_runner.go:124] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0813 00:12:05.410577  743232 command_runner.go:124] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0813 00:12:05.410585  743232 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:12:05.410589  743232 command_runner.go:124] > apparmor_profile = "crio-default"
	I0813 00:12:05.410596  743232 command_runner.go:124] > # Used to change irqbalance service config file path which is used for configuring
	I0813 00:12:05.410602  743232 command_runner.go:124] > # irqbalance daemon.
	I0813 00:12:05.410610  743232 command_runner.go:124] > irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0813 00:12:05.410617  743232 command_runner.go:124] > # Cgroup management implementation used for the runtime.
	I0813 00:12:05.410622  743232 command_runner.go:124] > cgroup_manager = "systemd"
	I0813 00:12:05.410628  743232 command_runner.go:124] > # Specify whether the image pull must be performed in a separate cgroup.
	I0813 00:12:05.410634  743232 command_runner.go:124] > separate_pull_cgroup = ""
	I0813 00:12:05.410641  743232 command_runner.go:124] > # List of default capabilities for containers. If it is empty or commented out,
	I0813 00:12:05.410654  743232 command_runner.go:124] > # only the capabilities defined in the containers json file by the user/kube
	I0813 00:12:05.410661  743232 command_runner.go:124] > # will be added.
	I0813 00:12:05.410665  743232 command_runner.go:124] > default_capabilities = [
	I0813 00:12:05.410673  743232 command_runner.go:124] > 	"CHOWN",
	I0813 00:12:05.410677  743232 command_runner.go:124] > 	"DAC_OVERRIDE",
	I0813 00:12:05.410681  743232 command_runner.go:124] > 	"FSETID",
	I0813 00:12:05.410684  743232 command_runner.go:124] > 	"FOWNER",
	I0813 00:12:05.410688  743232 command_runner.go:124] > 	"SETGID",
	I0813 00:12:05.410691  743232 command_runner.go:124] > 	"SETUID",
	I0813 00:12:05.410694  743232 command_runner.go:124] > 	"SETPCAP",
	I0813 00:12:05.410699  743232 command_runner.go:124] > 	"NET_BIND_SERVICE",
	I0813 00:12:05.410705  743232 command_runner.go:124] > 	"KILL",
	I0813 00:12:05.410709  743232 command_runner.go:124] > ]
	I0813 00:12:05.410717  743232 command_runner.go:124] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0813 00:12:05.410726  743232 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 00:12:05.410730  743232 command_runner.go:124] > default_sysctls = [
	I0813 00:12:05.410736  743232 command_runner.go:124] > ]
	I0813 00:12:05.410744  743232 command_runner.go:124] > # List of additional devices, specified as
	I0813 00:12:05.410752  743232 command_runner.go:124] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0813 00:12:05.410759  743232 command_runner.go:124] > # If it is empty or commented out, only the devices
	I0813 00:12:05.410765  743232 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 00:12:05.410769  743232 command_runner.go:124] > additional_devices = [
	I0813 00:12:05.410772  743232 command_runner.go:124] > ]
	I0813 00:12:05.410778  743232 command_runner.go:124] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0813 00:12:05.410784  743232 command_runner.go:124] > # directories does not exist, then CRI-O will automatically skip them.
	I0813 00:12:05.410787  743232 command_runner.go:124] > hooks_dir = [
	I0813 00:12:05.410792  743232 command_runner.go:124] > 	"/usr/share/containers/oci/hooks.d",
	I0813 00:12:05.410794  743232 command_runner.go:124] > ]
	I0813 00:12:05.410800  743232 command_runner.go:124] > # Path to the file specifying the defaults mounts for each container. The
	I0813 00:12:05.410807  743232 command_runner.go:124] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0813 00:12:05.410812  743232 command_runner.go:124] > # its default mounts from the following two files:
	I0813 00:12:05.410814  743232 command_runner.go:124] > #
	I0813 00:12:05.410821  743232 command_runner.go:124] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0813 00:12:05.410856  743232 command_runner.go:124] > #      override file, where users can either add in their own default mounts, or
	I0813 00:12:05.410864  743232 command_runner.go:124] > #      override the default mounts shipped with the package.
	I0813 00:12:05.410867  743232 command_runner.go:124] > #
	I0813 00:12:05.410874  743232 command_runner.go:124] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0813 00:12:05.410883  743232 command_runner.go:124] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0813 00:12:05.410896  743232 command_runner.go:124] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0813 00:12:05.410907  743232 command_runner.go:124] > #      only add mounts it finds in this file.
	I0813 00:12:05.410915  743232 command_runner.go:124] > #
	I0813 00:12:05.410921  743232 command_runner.go:124] > #default_mounts_file = ""
	I0813 00:12:05.410932  743232 command_runner.go:124] > # Maximum number of processes allowed in a container.
	I0813 00:12:05.410940  743232 command_runner.go:124] > pids_limit = 1024
	I0813 00:12:05.410949  743232 command_runner.go:124] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0813 00:12:05.410960  743232 command_runner.go:124] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0813 00:12:05.410970  743232 command_runner.go:124] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0813 00:12:05.410974  743232 command_runner.go:124] > # limit is never exceeded.
	I0813 00:12:05.410978  743232 command_runner.go:124] > log_size_max = -1
	I0813 00:12:05.410996  743232 command_runner.go:124] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0813 00:12:05.411005  743232 command_runner.go:124] > log_to_journald = false
	I0813 00:12:05.411015  743232 command_runner.go:124] > # Path to directory in which container exit files are written to by conmon.
	I0813 00:12:05.411027  743232 command_runner.go:124] > container_exits_dir = "/var/run/crio/exits"
	I0813 00:12:05.411039  743232 command_runner.go:124] > # Path to directory for container attach sockets.
	I0813 00:12:05.411050  743232 command_runner.go:124] > container_attach_socket_dir = "/var/run/crio"
	I0813 00:12:05.411064  743232 command_runner.go:124] > # The prefix to use for the source of the bind mounts.
	I0813 00:12:05.411070  743232 command_runner.go:124] > bind_mount_prefix = ""
	I0813 00:12:05.411082  743232 command_runner.go:124] > # If set to true, all containers will run in read-only mode.
	I0813 00:12:05.411091  743232 command_runner.go:124] > read_only = false
	I0813 00:12:05.411101  743232 command_runner.go:124] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0813 00:12:05.411114  743232 command_runner.go:124] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0813 00:12:05.411124  743232 command_runner.go:124] > # live configuration reload.
	I0813 00:12:05.411129  743232 command_runner.go:124] > log_level = "info"
	I0813 00:12:05.411135  743232 command_runner.go:124] > # Filter the log messages by the provided regular expression.
	I0813 00:12:05.411143  743232 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:12:05.411147  743232 command_runner.go:124] > log_filter = ""
	I0813 00:12:05.411156  743232 command_runner.go:124] > # The UID mappings for the user namespace of each container. A range is
	I0813 00:12:05.411163  743232 command_runner.go:124] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0813 00:12:05.411169  743232 command_runner.go:124] > # separated by comma.
	I0813 00:12:05.411173  743232 command_runner.go:124] > uid_mappings = ""
	I0813 00:12:05.411180  743232 command_runner.go:124] > # The GID mappings for the user namespace of each container. A range is
	I0813 00:12:05.411188  743232 command_runner.go:124] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0813 00:12:05.411194  743232 command_runner.go:124] > # separated by comma.
	I0813 00:12:05.411203  743232 command_runner.go:124] > gid_mappings = ""
	I0813 00:12:05.411214  743232 command_runner.go:124] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0813 00:12:05.411227  743232 command_runner.go:124] > # regarding the proper termination of the container. The lowest possible
	I0813 00:12:05.411237  743232 command_runner.go:124] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0813 00:12:05.411243  743232 command_runner.go:124] > ctr_stop_timeout = 30
	I0813 00:12:05.411252  743232 command_runner.go:124] > # manage_ns_lifecycle determines whether we pin and remove namespaces
	I0813 00:12:05.411259  743232 command_runner.go:124] > # and manage their lifecycle.
	I0813 00:12:05.411268  743232 command_runner.go:124] > # This option is being deprecated, and will be unconditionally true in the future.
	I0813 00:12:05.411272  743232 command_runner.go:124] > manage_ns_lifecycle = true
	I0813 00:12:05.411279  743232 command_runner.go:124] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0813 00:12:05.411292  743232 command_runner.go:124] > # when a pod does not have a private PID namespace, and does not use
	I0813 00:12:05.411304  743232 command_runner.go:124] > # a kernel separating runtime (like kata).
	I0813 00:12:05.411312  743232 command_runner.go:124] > # It requires manage_ns_lifecycle to be true.
	I0813 00:12:05.411318  743232 command_runner.go:124] > drop_infra_ctr = false
	I0813 00:12:05.411328  743232 command_runner.go:124] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0813 00:12:05.411335  743232 command_runner.go:124] > # You can use linux CPU list format to specify desired CPUs.
	I0813 00:12:05.411346  743232 command_runner.go:124] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0813 00:12:05.411356  743232 command_runner.go:124] > # infra_ctr_cpuset = ""
	I0813 00:12:05.411365  743232 command_runner.go:124] > # The directory where the state of the managed namespaces gets tracked.
	I0813 00:12:05.411377  743232 command_runner.go:124] > # Only used when manage_ns_lifecycle is true.
	I0813 00:12:05.411388  743232 command_runner.go:124] > namespaces_dir = "/var/run"
	I0813 00:12:05.411406  743232 command_runner.go:124] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0813 00:12:05.411414  743232 command_runner.go:124] > pinns_path = ""
	I0813 00:12:05.411429  743232 command_runner.go:124] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0813 00:12:05.411438  743232 command_runner.go:124] > # The name is matched against the runtimes map below. If this value is changed,
	I0813 00:12:05.411447  743232 command_runner.go:124] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0813 00:12:05.411456  743232 command_runner.go:124] > default_runtime = "runc"
	I0813 00:12:05.411467  743232 command_runner.go:124] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0813 00:12:05.411481  743232 command_runner.go:124] > # The runtime to use is picked based on the runtime_handler provided by the CRI.
	I0813 00:12:05.411494  743232 command_runner.go:124] > # If no runtime_handler is provided, the runtime will be picked based on the level
	I0813 00:12:05.411507  743232 command_runner.go:124] > # of trust of the workload. Each entry in the table should follow the format:
	I0813 00:12:05.411515  743232 command_runner.go:124] > #
	I0813 00:12:05.411522  743232 command_runner.go:124] > #[crio.runtime.runtimes.runtime-handler]
	I0813 00:12:05.411531  743232 command_runner.go:124] > #  runtime_path = "/path/to/the/executable"
	I0813 00:12:05.411538  743232 command_runner.go:124] > #  runtime_type = "oci"
	I0813 00:12:05.411546  743232 command_runner.go:124] > #  runtime_root = "/path/to/the/root"
	I0813 00:12:05.411557  743232 command_runner.go:124] > #  privileged_without_host_devices = false
	I0813 00:12:05.411567  743232 command_runner.go:124] > #  allowed_annotations = []
	I0813 00:12:05.411579  743232 command_runner.go:124] > # Where:
	I0813 00:12:05.411591  743232 command_runner.go:124] > # - runtime-handler: name used to identify the runtime
	I0813 00:12:05.411609  743232 command_runner.go:124] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0813 00:12:05.411618  743232 command_runner.go:124] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0813 00:12:05.411631  743232 command_runner.go:124] > #   the runtime executable name, and the runtime executable should be placed
	I0813 00:12:05.411641  743232 command_runner.go:124] > #   in $PATH.
	I0813 00:12:05.411659  743232 command_runner.go:124] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0813 00:12:05.411670  743232 command_runner.go:124] > #   omitted, an "oci" runtime is assumed.
	I0813 00:12:05.411683  743232 command_runner.go:124] > # - runtime_root (optional, string): root directory for storage of containers
	I0813 00:12:05.411691  743232 command_runner.go:124] > #   state.
	I0813 00:12:05.411701  743232 command_runner.go:124] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0813 00:12:05.411709  743232 command_runner.go:124] > #   host devices from being passed to privileged containers.
	I0813 00:12:05.411722  743232 command_runner.go:124] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0813 00:12:05.411736  743232 command_runner.go:124] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0813 00:12:05.411747  743232 command_runner.go:124] > #   The currently recognized values are:
	I0813 00:12:05.411757  743232 command_runner.go:124] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0813 00:12:05.411770  743232 command_runner.go:124] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0813 00:12:05.411781  743232 command_runner.go:124] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0813 00:12:05.411791  743232 command_runner.go:124] > [crio.runtime.runtimes.runc]
	I0813 00:12:05.411803  743232 command_runner.go:124] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0813 00:12:05.411815  743232 command_runner.go:124] > runtime_type = "oci"
	I0813 00:12:05.411825  743232 command_runner.go:124] > runtime_root = "/run/runc"
	I0813 00:12:05.411838  743232 command_runner.go:124] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0813 00:12:05.411847  743232 command_runner.go:124] > # running containers
	I0813 00:12:05.411857  743232 command_runner.go:124] > #[crio.runtime.runtimes.crun]
	I0813 00:12:05.411865  743232 command_runner.go:124] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0813 00:12:05.411877  743232 command_runner.go:124] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0813 00:12:05.411889  743232 command_runner.go:124] > # surface and mitigating the consequences of containers breakout.
	I0813 00:12:05.411901  743232 command_runner.go:124] > # Kata Containers with the default configured VMM
	I0813 00:12:05.411911  743232 command_runner.go:124] > #[crio.runtime.runtimes.kata-runtime]
	I0813 00:12:05.411921  743232 command_runner.go:124] > # Kata Containers with the QEMU VMM
	I0813 00:12:05.411931  743232 command_runner.go:124] > #[crio.runtime.runtimes.kata-qemu]
	I0813 00:12:05.411941  743232 command_runner.go:124] > # Kata Containers with the Firecracker VMM
	I0813 00:12:05.411949  743232 command_runner.go:124] > #[crio.runtime.runtimes.kata-fc]
	I0813 00:12:05.411957  743232 command_runner.go:124] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0813 00:12:05.411965  743232 command_runner.go:124] > #
	I0813 00:12:05.411976  743232 command_runner.go:124] > # CRI-O reads its configured registries defaults from the system wide
	I0813 00:12:05.411990  743232 command_runner.go:124] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0813 00:12:05.412003  743232 command_runner.go:124] > # you want to modify just CRI-O, you can change the registries configuration in
	I0813 00:12:05.412018  743232 command_runner.go:124] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0813 00:12:05.412030  743232 command_runner.go:124] > # use the system's defaults from /etc/containers/registries.conf.
	I0813 00:12:05.412037  743232 command_runner.go:124] > [crio.image]
	I0813 00:12:05.412043  743232 command_runner.go:124] > # Default transport for pulling images from a remote container storage.
	I0813 00:12:05.412053  743232 command_runner.go:124] > default_transport = "docker://"
	I0813 00:12:05.412066  743232 command_runner.go:124] > # The path to a file containing credentials necessary for pulling images from
	I0813 00:12:05.412079  743232 command_runner.go:124] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0813 00:12:05.412089  743232 command_runner.go:124] > global_auth_file = ""
	I0813 00:12:05.412097  743232 command_runner.go:124] > # The image used to instantiate infra containers.
	I0813 00:12:05.412107  743232 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:12:05.412116  743232 command_runner.go:124] > pause_image = "k8s.gcr.io/pause:3.4.1"
	I0813 00:12:05.412127  743232 command_runner.go:124] > # The path to a file containing credentials specific for pulling the pause_image from
	I0813 00:12:05.412139  743232 command_runner.go:124] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0813 00:12:05.412150  743232 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:12:05.412160  743232 command_runner.go:124] > pause_image_auth_file = ""
	I0813 00:12:05.412173  743232 command_runner.go:124] > # The command to run to have a container stay in the paused state.
	I0813 00:12:05.412188  743232 command_runner.go:124] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0813 00:12:05.412201  743232 command_runner.go:124] > # specified in the pause image. When commented out, it will fallback to the
	I0813 00:12:05.412215  743232 command_runner.go:124] > # default: "/pause". This option supports live configuration reload.
	I0813 00:12:05.412225  743232 command_runner.go:124] > pause_command = "/pause"
	I0813 00:12:05.412238  743232 command_runner.go:124] > # Path to the file which decides what sort of policy we use when deciding
	I0813 00:12:05.412251  743232 command_runner.go:124] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0813 00:12:05.412264  743232 command_runner.go:124] > # this option be used, as the default behavior of using the system-wide default
	I0813 00:12:05.412277  743232 command_runner.go:124] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0813 00:12:05.412286  743232 command_runner.go:124] > # refer to containers-policy.json(5) for more details.
	I0813 00:12:05.412292  743232 command_runner.go:124] > signature_policy = ""
	I0813 00:12:05.412302  743232 command_runner.go:124] > # List of registries to skip TLS verification for pulling images. Please
	I0813 00:12:05.412316  743232 command_runner.go:124] > # consider configuring the registries via /etc/containers/registries.conf before
	I0813 00:12:05.412326  743232 command_runner.go:124] > # changing them here.
	I0813 00:12:05.412335  743232 command_runner.go:124] > #insecure_registries = "[]"
	I0813 00:12:05.412348  743232 command_runner.go:124] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0813 00:12:05.412359  743232 command_runner.go:124] > # ignore; the latter will ignore volumes entirely.
	I0813 00:12:05.412368  743232 command_runner.go:124] > image_volumes = "mkdir"
	I0813 00:12:05.412374  743232 command_runner.go:124] > # List of registries to be used when pulling an unqualified image (e.g.,
	I0813 00:12:05.412388  743232 command_runner.go:124] > # "alpine:latest"). By default, registries is set to "docker.io" for
	I0813 00:12:05.412407  743232 command_runner.go:124] > # compatibility reasons. Depending on your workload and usecase you may add more
	I0813 00:12:05.412419  743232 command_runner.go:124] > # registries (e.g., "quay.io", "registry.fedoraproject.org",
	I0813 00:12:05.412430  743232 command_runner.go:124] > # "registry.opensuse.org", etc.).
	I0813 00:12:05.412438  743232 command_runner.go:124] > #registries = [
	I0813 00:12:05.412448  743232 command_runner.go:124] > # ]
	I0813 00:12:05.412457  743232 command_runner.go:124] > # Temporary directory to use for storing big files
	I0813 00:12:05.412465  743232 command_runner.go:124] > big_files_temporary_dir = ""
	I0813 00:12:05.412478  743232 command_runner.go:124] > # The crio.network table contains settings pertaining to the management of
	I0813 00:12:05.412487  743232 command_runner.go:124] > # CNI plugins.
	I0813 00:12:05.412496  743232 command_runner.go:124] > [crio.network]
	I0813 00:12:05.412509  743232 command_runner.go:124] > # The default CNI network name to be selected. If not set or "", then
	I0813 00:12:05.412520  743232 command_runner.go:124] > # CRI-O will pick-up the first one found in network_dir.
	I0813 00:12:05.412530  743232 command_runner.go:124] > # cni_default_network = "kindnet"
	I0813 00:12:05.412540  743232 command_runner.go:124] > # Path to the directory where CNI configuration files are located.
	I0813 00:12:05.412548  743232 command_runner.go:124] > network_dir = "/etc/cni/net.d/"
	I0813 00:12:05.412561  743232 command_runner.go:124] > # Paths to directories where CNI plugin binaries are located.
	I0813 00:12:05.412571  743232 command_runner.go:124] > plugin_dirs = [
	I0813 00:12:05.412580  743232 command_runner.go:124] > 	"/opt/cni/bin/",
	I0813 00:12:05.412588  743232 command_runner.go:124] > ]
	I0813 00:12:05.412598  743232 command_runner.go:124] > # A necessary configuration for Prometheus based metrics retrieval
	I0813 00:12:05.412606  743232 command_runner.go:124] > [crio.metrics]
	I0813 00:12:05.412617  743232 command_runner.go:124] > # Globally enable or disable metrics support.
	I0813 00:12:05.412624  743232 command_runner.go:124] > enable_metrics = false
	I0813 00:12:05.412631  743232 command_runner.go:124] > # The port on which the metrics server will listen.
	I0813 00:12:05.412642  743232 command_runner.go:124] > metrics_port = 9090
	I0813 00:12:05.412674  743232 command_runner.go:124] > # Local socket path to bind the metrics server to
	I0813 00:12:05.412683  743232 command_runner.go:124] > metrics_socket = ""
	I0813 00:12:05.412731  743232 command_runner.go:124] ! time="2021-08-13T00:12:05Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0813 00:12:05.412752  743232 command_runner.go:124] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0813 00:12:05.412831  743232 cni.go:93] Creating CNI manager for ""
	I0813 00:12:05.412848  743232 cni.go:154] 1 nodes found, recommending kindnet
	I0813 00:12:05.412864  743232 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 00:12:05.412881  743232 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210813001157-676638 NodeName:multinode-20210813001157-676638 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 00:12:05.413049  743232 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "multinode-20210813001157-676638"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 00:12:05.413169  743232 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-20210813001157-676638 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813001157-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 00:12:05.413259  743232 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 00:12:05.420365  743232 command_runner.go:124] > kubeadm
	I0813 00:12:05.420386  743232 command_runner.go:124] > kubectl
	I0813 00:12:05.420390  743232 command_runner.go:124] > kubelet
	I0813 00:12:05.420407  743232 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 00:12:05.420454  743232 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 00:12:05.427451  743232 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (562 bytes)
	I0813 00:12:05.439963  743232 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 00:12:05.453175  743232 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2072 bytes)
	I0813 00:12:05.465955  743232 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0813 00:12:05.469035  743232 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 00:12:05.478550  743232 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638 for IP: 192.168.49.2
	I0813 00:12:05.478597  743232 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0813 00:12:05.478620  743232 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0813 00:12:05.478678  743232 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/client.key
	I0813 00:12:05.478688  743232 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/client.crt with IP's: []
	I0813 00:12:05.612754  743232 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/client.crt ...
	I0813 00:12:05.612791  743232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/client.crt: {Name:mk61fc688cb78ce9aedb00454482594e8e950a95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:12:05.613051  743232 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/client.key ...
	I0813 00:12:05.613072  743232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/client.key: {Name:mk6d9e003e1c67b138e4fdad755f9d32f86c427c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:12:05.613190  743232 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/apiserver.key.dd3b5fb2
	I0813 00:12:05.613204  743232 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 00:12:05.753005  743232 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/apiserver.crt.dd3b5fb2 ...
	I0813 00:12:05.753058  743232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/apiserver.crt.dd3b5fb2: {Name:mk4fef4f7f2d078a30d1513a542abd428f8310d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:12:05.753327  743232 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/apiserver.key.dd3b5fb2 ...
	I0813 00:12:05.753350  743232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/apiserver.key.dd3b5fb2: {Name:mkaf06fbc98e40669db0da359e44676f0d64a650 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:12:05.753462  743232 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/apiserver.crt
	I0813 00:12:05.753527  743232 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/apiserver.key
	I0813 00:12:05.753585  743232 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/proxy-client.key
	I0813 00:12:05.753594  743232 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/proxy-client.crt with IP's: []
	I0813 00:12:05.831368  743232 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/proxy-client.crt ...
	I0813 00:12:05.831403  743232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/proxy-client.crt: {Name:mkb529097c6e0177b73e409a7c79fc1b8f33f1ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:12:05.831632  743232 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/proxy-client.key ...
	I0813 00:12:05.831651  743232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/proxy-client.key: {Name:mka88baa18bc0b8b23f7ba026c92d5c12bf6d677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:12:05.831766  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0813 00:12:05.831793  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0813 00:12:05.831812  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0813 00:12:05.831828  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0813 00:12:05.831842  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0813 00:12:05.831856  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0813 00:12:05.831868  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0813 00:12:05.831878  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0813 00:12:05.831932  743232 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/676638.pem (1338 bytes)
	W0813 00:12:05.831974  743232 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/676638_empty.pem, impossibly tiny 0 bytes
	I0813 00:12:05.831986  743232 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 00:12:05.832007  743232 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1082 bytes)
	I0813 00:12:05.832034  743232 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0813 00:12:05.832054  743232 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1675 bytes)
	I0813 00:12:05.832096  743232 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem (1708 bytes)
	I0813 00:12:05.832128  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:12:05.832142  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/676638.pem -> /usr/share/ca-certificates/676638.pem
	I0813 00:12:05.832154  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem -> /usr/share/ca-certificates/6766382.pem
	I0813 00:12:05.833093  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 00:12:05.889738  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 00:12:05.907963  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 00:12:05.925077  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 00:12:05.941185  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 00:12:05.957633  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 00:12:05.974144  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 00:12:05.990898  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 00:12:06.007980  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 00:12:06.025128  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/676638.pem --> /usr/share/ca-certificates/676638.pem (1338 bytes)
	I0813 00:12:06.042233  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem --> /usr/share/ca-certificates/6766382.pem (1708 bytes)
	I0813 00:12:06.059102  743232 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 00:12:06.071685  743232 ssh_runner.go:149] Run: openssl version
	I0813 00:12:06.076409  743232 command_runner.go:124] > OpenSSL 1.1.1f  31 Mar 2020
	I0813 00:12:06.076536  743232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/676638.pem && ln -fs /usr/share/ca-certificates/676638.pem /etc/ssl/certs/676638.pem"
	I0813 00:12:06.083829  743232 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/676638.pem
	I0813 00:12:06.086912  743232 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 13 00:05 /usr/share/ca-certificates/676638.pem
	I0813 00:12:06.086958  743232 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 13 00:05 /usr/share/ca-certificates/676638.pem
	I0813 00:12:06.087002  743232 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/676638.pem
	I0813 00:12:06.091680  743232 command_runner.go:124] > 51391683
	I0813 00:12:06.091985  743232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/676638.pem /etc/ssl/certs/51391683.0"
	I0813 00:12:06.099859  743232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6766382.pem && ln -fs /usr/share/ca-certificates/6766382.pem /etc/ssl/certs/6766382.pem"
	I0813 00:12:06.107641  743232 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6766382.pem
	I0813 00:12:06.110838  743232 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 13 00:05 /usr/share/ca-certificates/6766382.pem
	I0813 00:12:06.111005  743232 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 13 00:05 /usr/share/ca-certificates/6766382.pem
	I0813 00:12:06.111068  743232 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6766382.pem
	I0813 00:12:06.115772  743232 command_runner.go:124] > 3ec20f2e
	I0813 00:12:06.115963  743232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6766382.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 00:12:06.123724  743232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 00:12:06.131368  743232 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:12:06.134593  743232 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 12 23:55 /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:12:06.134664  743232 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 12 23:55 /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:12:06.134721  743232 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:12:06.139385  743232 command_runner.go:124] > b5213941
	I0813 00:12:06.139575  743232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
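The `openssl version` through `ln -fs` sequence above follows OpenSSL's trust-store convention: a certificate in `/etc/ssl/certs` is looked up by the hash of its subject name, so each installed PEM needs a symlink named `<subject-hash>.0` (the log's `51391683`, `3ec20f2e`, and `b5213941` are exactly these hashes). A self-contained sketch of the same steps, using a throwaway self-signed cert and a temp directory instead of `/etc/ssl/certs` (the `/CN=minikubeCA` subject is illustrative):

```shell
CERTS_DIR=$(mktemp -d)

# Generate a throwaway self-signed cert standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$CERTS_DIR/ca.key" -out "$CERTS_DIR/minikubeCA.pem" 2>/dev/null

# Same command the log runs: compute the subject-name hash of the cert.
hash=$(openssl x509 -hash -noout -in "$CERTS_DIR/minikubeCA.pem")

# Same guard as the log: only create the <hash>.0 link if it is not already there.
test -L "$CERTS_DIR/$hash.0" || ln -fs "$CERTS_DIR/minikubeCA.pem" "$CERTS_DIR/$hash.0"
ls -l "$CERTS_DIR/$hash.0"
```

The `.0` suffix disambiguates distinct certificates whose subject names happen to hash to the same value (they would get `.1`, `.2`, and so on).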
	I0813 00:12:06.147199  743232 kubeadm.go:390] StartCluster: {Name:multinode-20210813001157-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813001157-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0813 00:12:06.147302  743232 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 00:12:06.147345  743232 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 00:12:06.171589  743232 cri.go:76] found id: ""
	I0813 00:12:06.171666  743232 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 00:12:06.178197  743232 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0813 00:12:06.178238  743232 command_runner.go:124] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0813 00:12:06.178248  743232 command_runner.go:124] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0813 00:12:06.178860  743232 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 00:12:06.185783  743232 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 00:12:06.185836  743232 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 00:12:06.192420  743232 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0813 00:12:06.192447  743232 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0813 00:12:06.192458  743232 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0813 00:12:06.192471  743232 command_runner.go:124] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 00:12:06.192509  743232 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 00:12:06.192543  743232 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 00:12:06.246965  743232 command_runner.go:124] > [init] Using Kubernetes version: v1.21.3
	I0813 00:12:06.247044  743232 command_runner.go:124] > [preflight] Running pre-flight checks
	I0813 00:12:06.276043  743232 command_runner.go:124] > [preflight] The system verification failed. Printing the output from the verification:
	I0813 00:12:06.276115  743232 command_runner.go:124] > KERNEL_VERSION: 4.9.0-16-amd64
	I0813 00:12:06.276171  743232 command_runner.go:124] > OS: Linux
	I0813 00:12:06.276275  743232 command_runner.go:124] > CGROUPS_CPU: enabled
	I0813 00:12:06.276374  743232 command_runner.go:124] > CGROUPS_CPUACCT: enabled
	I0813 00:12:06.276447  743232 command_runner.go:124] > CGROUPS_CPUSET: enabled
	I0813 00:12:06.276501  743232 command_runner.go:124] > CGROUPS_DEVICES: enabled
	I0813 00:12:06.276548  743232 command_runner.go:124] > CGROUPS_FREEZER: enabled
	I0813 00:12:06.276590  743232 command_runner.go:124] > CGROUPS_MEMORY: enabled
	I0813 00:12:06.276630  743232 command_runner.go:124] > CGROUPS_PIDS: enabled
	I0813 00:12:06.276717  743232 command_runner.go:124] > CGROUPS_HUGETLB: missing
	I0813 00:12:06.350680  743232 command_runner.go:124] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0813 00:12:06.350816  743232 command_runner.go:124] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0813 00:12:06.350982  743232 command_runner.go:124] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0813 00:12:06.490355  743232 out.go:204]   - Generating certificates and keys ...
	I0813 00:12:06.486524  743232 command_runner.go:124] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0813 00:12:06.490531  743232 command_runner.go:124] > [certs] Using existing ca certificate authority
	I0813 00:12:06.490624  743232 command_runner.go:124] > [certs] Using existing apiserver certificate and key on disk
	I0813 00:12:06.585349  743232 command_runner.go:124] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0813 00:12:06.764629  743232 command_runner.go:124] > [certs] Generating "front-proxy-ca" certificate and key
	I0813 00:12:06.986480  743232 command_runner.go:124] > [certs] Generating "front-proxy-client" certificate and key
	I0813 00:12:07.290905  743232 command_runner.go:124] > [certs] Generating "etcd/ca" certificate and key
	I0813 00:12:07.510187  743232 command_runner.go:124] > [certs] Generating "etcd/server" certificate and key
	I0813 00:12:07.510416  743232 command_runner.go:124] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-20210813001157-676638] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0813 00:12:07.867622  743232 command_runner.go:124] > [certs] Generating "etcd/peer" certificate and key
	I0813 00:12:07.867755  743232 command_runner.go:124] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-20210813001157-676638] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0813 00:12:07.994363  743232 command_runner.go:124] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0813 00:12:08.415642  743232 command_runner.go:124] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0813 00:12:08.552229  743232 command_runner.go:124] > [certs] Generating "sa" key and public key
	I0813 00:12:08.552326  743232 command_runner.go:124] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0813 00:12:08.842097  743232 command_runner.go:124] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0813 00:12:09.066986  743232 command_runner.go:124] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0813 00:12:09.230909  743232 command_runner.go:124] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0813 00:12:09.321536  743232 command_runner.go:124] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0813 00:12:09.329071  743232 command_runner.go:124] > [kubelet-start] WARNING: unable to stop the kubelet service momentarily: [exit status 5]
	I0813 00:12:09.329207  743232 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0813 00:12:09.330188  743232 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0813 00:12:09.330245  743232 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0813 00:12:09.401109  743232 out.go:204]   - Booting up control plane ...
	I0813 00:12:09.398183  743232 command_runner.go:124] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0813 00:12:09.401298  743232 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0813 00:12:09.408271  743232 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0813 00:12:09.409949  743232 command_runner.go:124] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0813 00:12:09.410687  743232 command_runner.go:124] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0813 00:12:09.413176  743232 command_runner.go:124] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0813 00:12:22.915732  743232 command_runner.go:124] > [apiclient] All control plane components are healthy after 13.502444 seconds
	I0813 00:12:22.915919  743232 command_runner.go:124] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0813 00:12:22.928419  743232 command_runner.go:124] > [kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
	I0813 00:12:23.448140  743232 command_runner.go:124] > [upload-certs] Skipping phase. Please see --upload-certs
	I0813 00:12:23.448471  743232 command_runner.go:124] > [mark-control-plane] Marking the node multinode-20210813001157-676638 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0813 00:12:23.959701  743232 out.go:204]   - Configuring RBAC rules ...
	I0813 00:12:23.957887  743232 command_runner.go:124] > [bootstrap-token] Using token: 48vapw.in0tu46io8afu0yd
	I0813 00:12:23.959881  743232 command_runner.go:124] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0813 00:12:23.963747  743232 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0813 00:12:23.972066  743232 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0813 00:12:23.975425  743232 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0813 00:12:23.979500  743232 command_runner.go:124] > [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0813 00:12:23.981753  743232 command_runner.go:124] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0813 00:12:23.990582  743232 command_runner.go:124] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0813 00:12:24.215044  743232 command_runner.go:124] > [addons] Applied essential addon: CoreDNS
	I0813 00:12:24.367139  743232 command_runner.go:124] > [addons] Applied essential addon: kube-proxy
	I0813 00:12:24.367947  743232 command_runner.go:124] > Your Kubernetes control-plane has initialized successfully!
	I0813 00:12:24.368051  743232 command_runner.go:124] > To start using your cluster, you need to run the following as a regular user:
	I0813 00:12:24.368092  743232 command_runner.go:124] >   mkdir -p $HOME/.kube
	I0813 00:12:24.368205  743232 command_runner.go:124] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0813 00:12:24.368291  743232 command_runner.go:124] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0813 00:12:24.368375  743232 command_runner.go:124] > Alternatively, if you are the root user, you can run:
	I0813 00:12:24.368440  743232 command_runner.go:124] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0813 00:12:24.368515  743232 command_runner.go:124] > You should now deploy a pod network to the cluster.
	I0813 00:12:24.368612  743232 command_runner.go:124] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0813 00:12:24.368736  743232 command_runner.go:124] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0813 00:12:24.368857  743232 command_runner.go:124] > You can now join any number of control-plane nodes by copying certificate authorities
	I0813 00:12:24.368967  743232 command_runner.go:124] > and service account keys on each node and then running the following as root:
	I0813 00:12:24.369076  743232 command_runner.go:124] >   kubeadm join control-plane.minikube.internal:8443 --token 48vapw.in0tu46io8afu0yd \
	I0813 00:12:24.369194  743232 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:168e7adac45e0238c7bd00763c6ed6a04340e722951e8dc79c7dd45687f15171 \
	I0813 00:12:24.369347  743232 command_runner.go:124] > 	--control-plane 
	I0813 00:12:24.369537  743232 command_runner.go:124] > Then you can join any number of worker nodes by running the following on each as root:
	I0813 00:12:24.369657  743232 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token 48vapw.in0tu46io8afu0yd \
	I0813 00:12:24.369803  743232 command_runner.go:124] > 	--discovery-token-ca-cert-hash sha256:168e7adac45e0238c7bd00763c6ed6a04340e722951e8dc79c7dd45687f15171 
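The `--discovery-token-ca-cert-hash sha256:…` value in the join commands above is kubeadm's CA public-key pin: the SHA-256 digest of the cluster CA's DER-encoded public key, which joining nodes use to verify they are talking to the right API server. A sketch of how that hash is derived, reproduced against a throwaway self-signed CA so it is self-contained (the `/CN=kubernetes` subject and temp paths are illustrative; in a real cluster the input would be `/etc/kubernetes/pki/ca.crt`):

```shell
PKI_DIR=$(mktemp -d)

# Throwaway CA standing in for the cluster's /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -keyout "$PKI_DIR/ca.key" -out "$PKI_DIR/ca.crt" 2>/dev/null

# Extract the public key, re-encode it as DER, and take its SHA-256 digest.
hash=$(openssl x509 -pubkey -noout -in "$PKI_DIR/ca.crt" \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')

echo "sha256:$hash"
```

Because the hash pins the CA key itself rather than the bootstrap token, it stays stable across token rotations; only re-issuing the CA would change it.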
	I0813 00:12:24.370601  743232 command_runner.go:124] ! 	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	I0813 00:12:24.370676  743232 command_runner.go:124] ! 	[WARNING SystemVerification]: missing optional cgroups: hugetlb
	I0813 00:12:24.370954  743232 command_runner.go:124] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-16-amd64\n", err: exit status 1
	I0813 00:12:24.371084  743232 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0813 00:12:24.371121  743232 cni.go:93] Creating CNI manager for ""
	I0813 00:12:24.371128  743232 cni.go:154] 1 nodes found, recommending kindnet
	I0813 00:12:24.373510  743232 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 00:12:24.373576  743232 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 00:12:24.377030  743232 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0813 00:12:24.377050  743232 command_runner.go:124] >   Size: 2738488   	Blocks: 5352       IO Block: 4096   regular file
	I0813 00:12:24.377062  743232 command_runner.go:124] > Device: 801h/2049d	Inode: 3807833     Links: 1
	I0813 00:12:24.377071  743232 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 00:12:24.377082  743232 command_runner.go:124] > Access: 2021-02-10 15:18:15.000000000 +0000
	I0813 00:12:24.377090  743232 command_runner.go:124] > Modify: 2021-02-10 15:18:15.000000000 +0000
	I0813 00:12:24.377099  743232 command_runner.go:124] > Change: 2021-07-02 14:50:00.997696388 +0000
	I0813 00:12:24.377111  743232 command_runner.go:124] >  Birth: -
	I0813 00:12:24.377176  743232 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 00:12:24.377188  743232 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 00:12:24.390039  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 00:12:24.752096  743232 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0813 00:12:24.755755  743232 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0813 00:12:24.760842  743232 command_runner.go:124] > serviceaccount/kindnet created
	I0813 00:12:24.770830  743232 command_runner.go:124] > daemonset.apps/kindnet created
	I0813 00:12:24.774597  743232 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 00:12:24.774691  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:24.774701  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19 minikube.k8s.io/name=multinode-20210813001157-676638 minikube.k8s.io/updated_at=2021_08_13T00_12_24_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:24.893866  743232 command_runner.go:124] > -16
	I0813 00:12:24.893910  743232 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0813 00:12:24.893972  743232 ops.go:34] apiserver oom_adj: -16
	I0813 00:12:24.893991  743232 command_runner.go:124] > node/multinode-20210813001157-676638 labeled
	I0813 00:12:24.893977  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:24.959868  743232 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:12:25.460712  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:25.526464  743232 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:12:25.961183  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:26.026432  743232 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:12:26.461073  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:26.526610  743232 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:12:26.960082  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:27.031014  743232 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:12:27.460482  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:27.528051  743232 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:12:27.961086  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:28.028030  743232 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:12:28.460607  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:28.542014  743232 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:12:28.960460  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:29.029244  743232 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:12:29.460211  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:29.529606  743232 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:12:29.960139  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:30.025516  743232 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:12:30.461044  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:30.527910  743232 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:12:30.960523  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:34.267096  743232 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:12:35.558243  743232 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (4.597669427s)
	I0813 00:12:35.960342  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:36.030846  743232 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:12:36.460133  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:36.530022  743232 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:12:36.960564  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:37.026290  743232 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:12:37.460230  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:37.527753  743232 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:12:37.960687  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:38.032131  743232 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:12:38.460683  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:38.525955  743232 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:12:38.960745  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:39.030227  743232 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:12:39.460772  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:39.531258  743232 command_runner.go:124] ! Error from server (NotFound): serviceaccounts "default" not found
	I0813 00:12:39.960810  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:12:40.030348  743232 command_runner.go:124] > NAME      SECRETS   AGE
	I0813 00:12:40.030384  743232 command_runner.go:124] > default   1         1s
	I0813 00:12:40.030405  743232 kubeadm.go:985] duration metric: took 15.255778248s to wait for elevateKubeSystemPrivileges.
	I0813 00:12:40.030419  743232 kubeadm.go:392] StartCluster complete in 33.883229962s
	I0813 00:12:40.030444  743232 settings.go:142] acquiring lock: {Name:mk8e048b414f35bb1583f1d1b3e929d90c1bd9ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:12:40.030576  743232 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:12:40.031639  743232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig: {Name:mk7dda383efa2f679c68affe6e459fff93248137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:12:40.032219  743232 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:12:40.032467  743232 kapi.go:59] client config for multinode-20210813001157-676638: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 00:12:40.033042  743232 cert_rotation.go:137] Starting client certificate rotation controller
	I0813 00:12:40.034991  743232 round_trippers.go:432] GET https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0813 00:12:40.035012  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:40.035017  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:40.035022  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:40.043295  743232 round_trippers.go:457] Response Status: 200 OK in 8 milliseconds
	I0813 00:12:40.043321  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:40.043326  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:40.043329  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:40.043333  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:40.043336  743232 round_trippers.go:463]     Content-Length: 291
	I0813 00:12:40.043339  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:40 GMT
	I0813 00:12:40.043342  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:40.043365  743232 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e6fedb6b-c1c9-4968-a561-b9e44ebdfcf9","resourceVersion":"396","creationTimestamp":"2021-08-13T00:12:24Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0813 00:12:40.044104  743232 request.go:1123] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e6fedb6b-c1c9-4968-a561-b9e44ebdfcf9","resourceVersion":"396","creationTimestamp":"2021-08-13T00:12:24Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0813 00:12:40.044164  743232 round_trippers.go:432] PUT https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0813 00:12:40.044174  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:40.044180  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:40.044187  743232 round_trippers.go:442]     Content-Type: application/json
	I0813 00:12:40.044192  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:40.047062  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:40.047084  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:40.047094  743232 round_trippers.go:463]     Content-Length: 291
	I0813 00:12:40.047098  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:40 GMT
	I0813 00:12:40.047102  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:40.047106  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:40.047109  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:40.047114  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:40.047141  743232 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e6fedb6b-c1c9-4968-a561-b9e44ebdfcf9","resourceVersion":"413","creationTimestamp":"2021-08-13T00:12:24Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0813 00:12:40.548062  743232 round_trippers.go:432] GET https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0813 00:12:40.548096  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:40.548103  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:40.548109  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:40.550580  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:40.550602  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:40.550607  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:40.550611  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:40.550614  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:40.550617  743232 round_trippers.go:463]     Content-Length: 291
	I0813 00:12:40.550620  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:40 GMT
	I0813 00:12:40.550623  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:40.550647  743232 request.go:1123] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e6fedb6b-c1c9-4968-a561-b9e44ebdfcf9","resourceVersion":"450","creationTimestamp":"2021-08-13T00:12:24Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0813 00:12:40.550760  743232 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20210813001157-676638" rescaled to 1
	I0813 00:12:40.550804  743232 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 00:12:40.553911  743232 out.go:177] * Verifying Kubernetes components...
	I0813 00:12:40.550867  743232 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 00:12:40.553990  743232 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:12:40.550895  743232 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 00:12:40.554082  743232 addons.go:59] Setting storage-provisioner=true in profile "multinode-20210813001157-676638"
	I0813 00:12:40.554101  743232 addons.go:59] Setting default-storageclass=true in profile "multinode-20210813001157-676638"
	I0813 00:12:40.554132  743232 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20210813001157-676638"
	I0813 00:12:40.554106  743232 addons.go:135] Setting addon storage-provisioner=true in "multinode-20210813001157-676638"
	W0813 00:12:40.554220  743232 addons.go:147] addon storage-provisioner should already be in state true
	I0813 00:12:40.554254  743232 host.go:66] Checking if "multinode-20210813001157-676638" exists ...
	I0813 00:12:40.554546  743232 cli_runner.go:115] Run: docker container inspect multinode-20210813001157-676638 --format={{.State.Status}}
	I0813 00:12:40.554744  743232 cli_runner.go:115] Run: docker container inspect multinode-20210813001157-676638 --format={{.State.Status}}
	I0813 00:12:40.567856  743232 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:12:40.568226  743232 kapi.go:59] client config for multinode-20210813001157-676638: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 00:12:40.570956  743232 node_ready.go:35] waiting up to 6m0s for node "multinode-20210813001157-676638" to be "Ready" ...
	I0813 00:12:40.571083  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:40.571093  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:40.571108  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:40.571119  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:40.574433  743232 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:12:40.574454  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:40.574460  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:40.574465  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:40.574469  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:40.574473  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:40 GMT
	I0813 00:12:40.574477  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:40.574711  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:40.575771  743232 node_ready.go:49] node "multinode-20210813001157-676638" has status "Ready":"True"
	I0813 00:12:40.575788  743232 node_ready.go:38] duration metric: took 4.798919ms waiting for node "multinode-20210813001157-676638" to be "Ready" ...
	I0813 00:12:40.575800  743232 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:12:40.575875  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0813 00:12:40.575887  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:40.575894  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:40.575900  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:40.583412  743232 round_trippers.go:457] Response Status: 200 OK in 7 milliseconds
	I0813 00:12:40.583437  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:40.583445  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:40.583453  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:40.583458  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:40.583463  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:40.583467  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:40 GMT
	I0813 00:12:40.583982  743232 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"451"},"items":[{"metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 56285 chars]
	I0813 00:12:40.592546  743232 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace to be "Ready" ...
	I0813 00:12:40.592732  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:40.592776  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:40.592790  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:40.592796  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:40.595604  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:40.595628  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:40.595634  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:40.595640  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:40.595644  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:40.595648  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:40.595652  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:40 GMT
	I0813 00:12:40.595783  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:40.599875  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:40.599897  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:40.599903  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:40.599907  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:40.602160  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:40.602178  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:40.602184  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:40.602188  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:40 GMT
	I0813 00:12:40.602193  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:40.602196  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:40.602215  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:40.602460  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:40.607677  743232 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:12:40.607998  743232 kapi.go:59] client config for multinode-20210813001157-676638: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 00:12:40.609665  743232 round_trippers.go:432] GET https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0813 00:12:40.609685  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:40.609692  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:40.609697  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:40.612828  743232 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 00:12:40.611781  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:40.612935  743232 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 00:12:40.612945  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:40.612949  743232 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 00:12:40.612958  743232 round_trippers.go:463]     Content-Length: 109
	I0813 00:12:40.612965  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:40 GMT
	I0813 00:12:40.612971  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:40.612975  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:40.612980  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:40.612992  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:40.613022  743232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813001157-676638
	I0813 00:12:40.613022  743232 request.go:1123] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"451"},"items":[]}
	I0813 00:12:40.613838  743232 addons.go:135] Setting addon default-storageclass=true in "multinode-20210813001157-676638"
	W0813 00:12:40.613859  743232 addons.go:147] addon default-storageclass should already be in state true
	I0813 00:12:40.613884  743232 host.go:66] Checking if "multinode-20210813001157-676638" exists ...
	I0813 00:12:40.614252  743232 cli_runner.go:115] Run: docker container inspect multinode-20210813001157-676638 --format={{.State.Status}}
	I0813 00:12:40.641287  743232 command_runner.go:124] > apiVersion: v1
	I0813 00:12:40.641315  743232 command_runner.go:124] > data:
	I0813 00:12:40.641322  743232 command_runner.go:124] >   Corefile: |
	I0813 00:12:40.641328  743232 command_runner.go:124] >     .:53 {
	I0813 00:12:40.641333  743232 command_runner.go:124] >         errors
	I0813 00:12:40.641341  743232 command_runner.go:124] >         health {
	I0813 00:12:40.641348  743232 command_runner.go:124] >            lameduck 5s
	I0813 00:12:40.641353  743232 command_runner.go:124] >         }
	I0813 00:12:40.641358  743232 command_runner.go:124] >         ready
	I0813 00:12:40.641371  743232 command_runner.go:124] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0813 00:12:40.641378  743232 command_runner.go:124] >            pods insecure
	I0813 00:12:40.641387  743232 command_runner.go:124] >            fallthrough in-addr.arpa ip6.arpa
	I0813 00:12:40.641399  743232 command_runner.go:124] >            ttl 30
	I0813 00:12:40.641407  743232 command_runner.go:124] >         }
	I0813 00:12:40.641413  743232 command_runner.go:124] >         prometheus :9153
	I0813 00:12:40.641423  743232 command_runner.go:124] >         forward . /etc/resolv.conf {
	I0813 00:12:40.641429  743232 command_runner.go:124] >            max_concurrent 1000
	I0813 00:12:40.641438  743232 command_runner.go:124] >         }
	I0813 00:12:40.641447  743232 command_runner.go:124] >         cache 30
	I0813 00:12:40.641456  743232 command_runner.go:124] >         loop
	I0813 00:12:40.641462  743232 command_runner.go:124] >         reload
	I0813 00:12:40.641471  743232 command_runner.go:124] >         loadbalance
	I0813 00:12:40.641476  743232 command_runner.go:124] >     }
	I0813 00:12:40.641481  743232 command_runner.go:124] > kind: ConfigMap
	I0813 00:12:40.641485  743232 command_runner.go:124] > metadata:
	I0813 00:12:40.641494  743232 command_runner.go:124] >   creationTimestamp: "2021-08-13T00:12:24Z"
	I0813 00:12:40.641500  743232 command_runner.go:124] >   name: coredns
	I0813 00:12:40.641505  743232 command_runner.go:124] >   namespace: kube-system
	I0813 00:12:40.641511  743232 command_runner.go:124] >   resourceVersion: "250"
	I0813 00:12:40.641517  743232 command_runner.go:124] >   uid: b1792d24-b809-4fbc-9509-3b1aeda88a3f
	I0813 00:12:40.641646  743232 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 00:12:40.660333  743232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33293 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813001157-676638/id_rsa Username:docker}
	I0813 00:12:40.663838  743232 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 00:12:40.663868  743232 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 00:12:40.663930  743232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813001157-676638
	I0813 00:12:40.710544  743232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33293 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813001157-676638/id_rsa Username:docker}
	I0813 00:12:40.807851  743232 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 00:12:40.907295  743232 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 00:12:41.103725  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:41.103756  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:41.103764  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:41.103769  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:41.107636  743232 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:12:41.107663  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:41.107670  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:41.107676  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:41.107685  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:41.107700  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:41.107708  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:41 GMT
	I0813 00:12:41.107850  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:41.108380  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:41.108407  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:41.108415  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:41.108420  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:41.109091  743232 command_runner.go:124] > configmap/coredns replaced
	I0813 00:12:41.114940  743232 round_trippers.go:457] Response Status: 200 OK in 6 milliseconds
	I0813 00:12:41.114964  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:41.114973  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:41.114980  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:41.114985  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:41 GMT
	I0813 00:12:41.114990  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:41.114996  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:41.115126  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:41.115727  743232 start.go:736] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0813 00:12:41.603966  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:41.603995  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:41.604005  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:41.604015  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:41.606403  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:41.606437  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:41.606446  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:41.606452  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:41.606458  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:41 GMT
	I0813 00:12:41.606463  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:41.606470  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:41.606641  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:41.607206  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:41.607226  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:41.607235  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:41.607242  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:41.610906  743232 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:12:41.610927  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:41.610934  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:41.610939  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:41.610944  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:41.610949  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:41.610953  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:41 GMT
	I0813 00:12:41.611113  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:41.613953  743232 command_runner.go:124] > serviceaccount/storage-provisioner created
	I0813 00:12:41.613992  743232 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0813 00:12:41.614005  743232 command_runner.go:124] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0813 00:12:41.614019  743232 command_runner.go:124] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0813 00:12:41.614033  743232 command_runner.go:124] > endpoints/k8s.io-minikube-hostpath created
	I0813 00:12:41.614045  743232 command_runner.go:124] > pod/storage-provisioner created
	I0813 00:12:41.614096  743232 command_runner.go:124] > storageclass.storage.k8s.io/standard created
	I0813 00:12:41.616360  743232 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 00:12:41.616383  743232 addons.go:344] enableAddons completed in 1.065500152s
	I0813 00:12:42.103589  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:42.103617  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:42.103623  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:42.103627  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:42.106465  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:42.106492  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:42.106500  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:42.106505  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:42.106509  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:42.106513  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:42.106518  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:42 GMT
	I0813 00:12:42.106686  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:42.107101  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:42.107118  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:42.107125  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:42.107130  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:42.108809  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:42.108825  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:42.108829  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:42 GMT
	I0813 00:12:42.108833  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:42.108836  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:42.108838  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:42.108842  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:42.108960  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:42.603579  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:42.603606  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:42.603612  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:42.603617  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:42.605870  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:42.605898  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:42.605905  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:42.605910  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:42 GMT
	I0813 00:12:42.605914  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:42.605919  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:42.605923  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:42.606041  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:42.606433  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:42.606449  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:42.606455  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:42.606458  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:42.608337  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:42.608360  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:42.608370  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:42.608374  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:42.608379  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:42.608383  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:42.608389  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:42 GMT
	I0813 00:12:42.608482  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:42.608727  743232 pod_ready.go:102] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"False"
	I0813 00:12:43.103217  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:43.103245  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:43.103251  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:43.103255  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:43.106074  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:43.106104  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:43.106118  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:43.106129  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:43.106135  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:43.106140  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:43.106145  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:43 GMT
	I0813 00:12:43.106275  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:43.106664  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:43.106687  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:43.106695  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:43.106702  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:43.108334  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:43.108356  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:43.108363  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:43.108370  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:43.108376  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:43.108381  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:43.108386  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:43 GMT
	I0813 00:12:43.108481  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:43.603670  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:43.603706  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:43.603715  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:43.603723  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:43.607750  743232 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 00:12:43.607784  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:43.607792  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:43.607797  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:43.607802  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:43.607806  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:43 GMT
	I0813 00:12:43.607810  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:43.607957  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:43.608339  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:43.608353  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:43.608358  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:43.608362  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:43.610285  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:43.610309  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:43.610316  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:43.610321  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:43.610332  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:43.610336  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:43.610341  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:43 GMT
	I0813 00:12:43.610483  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:44.103072  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:44.103106  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:44.103114  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:44.103120  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:44.105588  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:44.105610  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:44.105617  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:44 GMT
	I0813 00:12:44.105622  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:44.105626  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:44.105631  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:44.105635  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:44.105751  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:44.106216  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:44.106232  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:44.106238  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:44.106242  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:44.108249  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:44.108272  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:44.108279  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:44.108284  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:44 GMT
	I0813 00:12:44.108289  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:44.108293  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:44.108298  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:44.108462  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:44.603033  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:44.603067  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:44.603076  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:44.603082  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:44.605422  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:44.605447  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:44.605453  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:44.605458  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:44 GMT
	I0813 00:12:44.605462  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:44.605466  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:44.605470  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:44.605611  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:44.606017  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:44.606035  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:44.606041  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:44.606045  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:44.608071  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:44.608093  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:44.608098  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:44.608102  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:44.608105  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:44.608109  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:44.608112  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:44 GMT
	I0813 00:12:44.608190  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:45.103840  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:45.103879  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:45.103887  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:45.103892  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:45.106673  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:45.106705  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:45.106712  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:45.106722  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:45.106726  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:45.106731  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:45.106735  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:45 GMT
	I0813 00:12:45.106904  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:45.107249  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:45.107261  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:45.107266  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:45.107274  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:45.109052  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:45.109074  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:45.109084  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:45.109090  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:45 GMT
	I0813 00:12:45.109094  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:45.109099  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:45.109103  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:45.109278  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:45.109537  743232 pod_ready.go:102] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"False"
	I0813 00:12:45.603906  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:45.603934  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:45.603942  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:45.603948  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:45.606466  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:45.606486  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:45.606491  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:45.606495  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:45.606498  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:45.606501  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:45.606504  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:45 GMT
	I0813 00:12:45.606595  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:45.606963  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:45.606976  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:45.606981  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:45.606985  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:45.608693  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:45.608712  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:45.608719  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:45.608723  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:45.608728  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:45.608733  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:45.608738  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:45 GMT
	I0813 00:12:45.608835  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:46.103431  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:46.103460  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:46.103466  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:46.103470  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:46.105963  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:46.105984  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:46.105991  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:46.105995  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:46.105999  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:46.106003  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:46.106011  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:46 GMT
	I0813 00:12:46.106101  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:46.106459  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:46.106507  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:46.106538  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:46.106545  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:46.108442  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:46.108464  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:46.108471  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:46.108476  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:46.108481  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:46.108485  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:46.108489  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:46 GMT
	I0813 00:12:46.108583  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:46.603183  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:46.603212  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:46.603218  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:46.603222  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:46.605687  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:46.605708  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:46.605713  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:46.605716  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:46.605720  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:46 GMT
	I0813 00:12:46.605723  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:46.605726  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:46.605873  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:46.606199  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:46.606211  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:46.606215  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:46.606219  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:46.607947  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:46.607971  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:46.607979  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:46.607984  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:46.607989  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:46.607993  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:46.608004  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:46 GMT
	I0813 00:12:46.608154  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:47.103814  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:47.103844  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:47.103850  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:47.103854  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:47.106400  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:47.106426  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:47.106432  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:47.106437  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:47.106442  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:47.106447  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:47.106451  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:47 GMT
	I0813 00:12:47.106552  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:47.106968  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:47.106986  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:47.106992  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:47.106998  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:47.108831  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:47.108853  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:47.108860  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:47.108866  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:47 GMT
	I0813 00:12:47.108870  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:47.108879  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:47.108883  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:47.108964  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:47.603571  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:47.603598  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:47.603606  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:47.603612  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:47.606377  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:47.606402  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:47.606408  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:47.606412  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:47.606415  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:47.606419  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:47.606422  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:47 GMT
	I0813 00:12:47.606552  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:47.606945  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:47.606961  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:47.606968  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:47.606974  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:47.608732  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:47.608758  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:47.608763  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:47.608767  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:47.608770  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:47.608772  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:47.608775  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:47 GMT
	I0813 00:12:47.608859  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:47.609093  743232 pod_ready.go:102] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"False"
	I0813 00:12:48.103889  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:48.103926  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:48.103935  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:48.103941  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:48.106629  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:48.106655  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:48.106663  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:48 GMT
	I0813 00:12:48.106667  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:48.106670  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:48.106676  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:48.106680  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:48.106797  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:48.107152  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:48.107165  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:48.107170  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:48.107174  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:48.109061  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:48.109084  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:48.109091  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:48.109097  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:48.109102  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:48.109106  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:48.109111  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:48 GMT
	I0813 00:12:48.109283  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:48.603856  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:48.603884  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:48.603890  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:48.603894  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:48.606191  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:48.606216  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:48.606222  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:48 GMT
	I0813 00:12:48.606225  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:48.606228  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:48.606231  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:48.606234  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:48.606428  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:48.606862  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:48.606880  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:48.606888  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:48.606892  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:48.608783  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:48.608801  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:48.608808  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:48.608812  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:48.608817  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:48.608821  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:48.608825  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:48 GMT
	I0813 00:12:48.608905  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:49.103505  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:49.103529  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:49.103534  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:49.103539  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:49.105915  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:49.105936  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:49.105941  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:49.105944  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:49.105948  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:49.105950  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:49.105954  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:49 GMT
	I0813 00:12:49.106069  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:49.106467  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:49.106484  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:49.106491  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:49.106496  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:49.108142  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:49.108160  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:49.108165  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:49.108169  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:49.108172  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:49.108175  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:49.108179  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:49 GMT
	I0813 00:12:49.108331  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:49.603016  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:49.603041  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:49.603047  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:49.603051  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:49.605677  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:49.605700  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:49.605707  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:49.605712  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:49.605716  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:49 GMT
	I0813 00:12:49.605720  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:49.605726  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:49.605849  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:49.606280  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:49.606296  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:49.606304  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:49.606310  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:49.608134  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:49.608165  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:49.608172  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:49.608177  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:49.608181  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:49.608188  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:49.608192  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:49 GMT
	I0813 00:12:49.608297  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:50.103047  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:50.103078  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:50.103084  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:50.103088  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:50.105595  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:50.105620  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:50.105628  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:50.105633  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:50 GMT
	I0813 00:12:50.105638  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:50.105643  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:50.105659  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:50.105778  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:50.106177  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:50.106196  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:50.106205  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:50.106210  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:50.108128  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:50.108149  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:50.108156  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:50.108160  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:50.108165  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:50.108171  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:50.108177  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:50 GMT
	I0813 00:12:50.108262  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:50.108530  743232 pod_ready.go:102] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"False"
	I0813 00:12:50.603865  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:50.603892  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:50.603900  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:50.603906  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:50.605733  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:50.605776  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:50.605784  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:50.605788  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:50.605795  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:50.605800  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:50 GMT
	I0813 00:12:50.605805  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:50.605944  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:50.606313  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:50.606327  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:50.606332  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:50.606337  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:50.608009  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:50.608031  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:50.608037  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:50.608042  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:50.608046  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:50.608050  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:50.608055  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:50 GMT
	I0813 00:12:50.608153  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:51.103848  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:51.103880  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:51.103886  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:51.103891  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:51.106929  743232 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:12:51.106964  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:51.106973  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:51.106979  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:51.106985  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:51 GMT
	I0813 00:12:51.106995  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:51.107001  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:51.107166  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:51.107609  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:51.107631  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:51.107637  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:51.107641  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:51.109718  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:51.109735  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:51.109740  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:51.109743  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:51.109746  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:51.109749  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:51.109752  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:51 GMT
	I0813 00:12:51.109854  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:51.603323  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:51.603353  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:51.603361  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:51.603367  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:51.605931  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:51.605960  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:51.605969  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:51.605974  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:51 GMT
	I0813 00:12:51.605979  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:51.605985  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:51.605989  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:51.606118  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:51.606503  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:51.606519  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:51.606524  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:51.606530  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:51.608562  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:51.608582  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:51.608588  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:51.608593  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:51.608600  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:51.608605  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:51.608610  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:51 GMT
	I0813 00:12:51.608814  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:52.103394  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:52.103431  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:52.103441  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:52.103446  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:52.106317  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:52.106346  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:52.106353  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:52.106358  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:52.106362  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:52.106367  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:52.106370  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:52 GMT
	I0813 00:12:52.106506  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:52.106936  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:52.106955  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:52.106962  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:52.106968  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:52.108899  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:52.108920  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:52.108927  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:52.108935  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:52.108939  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:52.108943  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:52.108947  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:52 GMT
	I0813 00:12:52.109030  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:52.109311  743232 pod_ready.go:102] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"False"
	I0813 00:12:52.603687  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:52.603713  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:52.603719  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:52.603723  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:52.605871  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:52.605895  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:52.605902  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:52 GMT
	I0813 00:12:52.605907  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:52.605913  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:52.605918  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:52.605923  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:52.606011  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:52.606447  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:52.606465  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:52.606472  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:52.606478  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:52.608613  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:52.608631  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:52.608636  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:52.608641  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:52.608671  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:52.608680  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:52.608685  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:52 GMT
	I0813 00:12:52.608794  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:53.103686  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:53.103717  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:53.103726  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:53.103730  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:53.106461  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:53.106490  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:53.106498  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:53.106502  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:53.106505  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:53.106508  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:53.106512  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:53 GMT
	I0813 00:12:53.106632  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:53.107013  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:53.107028  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:53.107034  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:53.107039  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:53.109011  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:53.109031  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:53.109038  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:53 GMT
	I0813 00:12:53.109043  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:53.109047  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:53.109051  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:53.109056  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:53.109165  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:53.603772  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:53.603797  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:53.603803  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:53.603807  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:53.606542  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:53.606569  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:53.606576  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:53.606582  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:53.606587  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:53.606593  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:53 GMT
	I0813 00:12:53.606598  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:53.606754  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:53.607146  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:53.607160  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:53.607165  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:53.607169  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:53.608912  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:53.608928  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:53.608933  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:53 GMT
	I0813 00:12:53.608941  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:53.608962  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:53.608967  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:53.608971  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:53.609063  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:54.103753  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:54.103784  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:54.103795  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:54.103799  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:54.106591  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:54.106614  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:54.106626  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:54.106629  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:54.106633  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:54 GMT
	I0813 00:12:54.106637  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:54.106640  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:54.106728  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:54.107113  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:54.107193  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:54.107198  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:54.107203  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:54.109286  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:54.109308  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:54.109314  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:54.109318  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:54.109321  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:54.109328  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:54 GMT
	I0813 00:12:54.109331  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:54.109460  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:54.109723  743232 pod_ready.go:102] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"False"
	I0813 00:12:54.602998  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:54.603033  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:54.603041  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:54.603046  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:54.605681  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:54.605705  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:54.605712  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:54.605717  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:54.605721  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:54.605726  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:54.605730  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:54 GMT
	I0813 00:12:54.605861  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:54.606225  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:54.606240  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:54.606247  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:54.606253  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:54.608046  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:54.608069  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:54.608076  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:54.608080  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:54.608088  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:54.608093  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:54.608098  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:54 GMT
	I0813 00:12:54.608211  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:55.103857  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:55.103885  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:55.103891  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:55.103895  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:55.106523  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:55.106547  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:55.106553  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:55.106556  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:55.106560  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:55 GMT
	I0813 00:12:55.106563  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:55.106566  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:55.106656  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:55.107025  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:55.107044  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:55.107050  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:55.107054  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:55.108829  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:55.108848  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:55.108854  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:55.108858  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:55 GMT
	I0813 00:12:55.108861  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:55.108865  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:55.108870  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:55.108966  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:55.603644  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:55.603668  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:55.603673  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:55.603680  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:55.606629  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:55.606650  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:55.606654  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:55.606658  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:55.606661  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:55.606664  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:55 GMT
	I0813 00:12:55.606667  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:55.606769  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:55.607165  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:55.607182  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:55.607189  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:55.607195  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:55.609130  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:55.609150  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:55.609156  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:55.609160  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:55.609164  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:55.609168  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:55.609173  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:55 GMT
	I0813 00:12:55.609301  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:56.103912  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:56.103941  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:56.103947  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:56.103952  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:56.106597  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:56.106621  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:56.106628  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:56.106632  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:56.106636  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:56.106640  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:56.106644  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:56 GMT
	I0813 00:12:56.106738  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:56.107130  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:56.107152  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:56.107157  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:56.107162  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:56.109055  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:56.109080  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:56.109087  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:56.109091  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:56.109096  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:56.109099  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:56 GMT
	I0813 00:12:56.109104  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:56.109199  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:56.603438  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:56.603466  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:56.603472  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:56.603477  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:56.606108  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:56.606128  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:56.606134  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:56.606137  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:56.606140  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:56.606144  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:56 GMT
	I0813 00:12:56.606146  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:56.606243  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:56.606609  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:56.606623  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:56.606628  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:56.606632  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:56.608477  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:56.608498  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:56.608505  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:56.608510  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:56.608515  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:56 GMT
	I0813 00:12:56.608520  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:56.608524  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:56.608631  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:56.608891  743232 pod_ready.go:102] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"False"
	I0813 00:12:57.103182  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:57.103207  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:57.103213  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:57.103219  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:57.105881  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:57.105904  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:57.105911  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:57.105914  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:57.105917  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:57.105920  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:57.105924  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:57 GMT
	I0813 00:12:57.106078  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:57.106459  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:57.106474  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:57.106479  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:57.106483  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:57.108263  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:57.108279  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:57.108286  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:57.108290  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:57 GMT
	I0813 00:12:57.108294  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:57.108299  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:57.108303  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:57.108410  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:57.602976  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:57.603008  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:57.603016  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:57.603021  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:57.605745  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:57.605774  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:57.605781  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:57.605787  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:57.605792  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:57.605797  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:57.605802  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:57 GMT
	I0813 00:12:57.605910  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:57.606262  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:57.606276  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:57.606281  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:57.606285  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:57.607964  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:57.607979  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:57.607984  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:57.607989  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:57.607993  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:57.607996  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:57 GMT
	I0813 00:12:57.607999  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:57.608137  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:58.103174  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:58.103205  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:58.103211  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:58.103214  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:58.105761  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:58.105788  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:58.105795  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:58 GMT
	I0813 00:12:58.105802  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:58.105807  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:58.105815  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:58.105820  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:58.105950  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:58.106465  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:58.106493  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:58.106501  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:58.106511  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:58.108313  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:58.108335  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:58.108341  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:58.108346  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:58.108350  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:58.108354  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:58.108360  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:58 GMT
	I0813 00:12:58.108459  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:58.603066  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:58.603097  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:58.603106  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:58.603111  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:58.605757  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:58.605780  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:58.605786  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:58.605789  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:58.605792  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:58.605795  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:58.605798  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:58 GMT
	I0813 00:12:58.605916  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:58.606251  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:58.606265  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:58.606270  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:58.606275  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:58.607926  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:58.607941  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:58.607946  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:58.607951  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:58.607954  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:58 GMT
	I0813 00:12:58.607957  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:58.607960  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:58.608044  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:59.103659  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:59.103688  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:59.103697  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:59.103703  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:59.106322  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:59.106345  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:59.106350  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:59.106354  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:59.106357  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:59.106360  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:59.106363  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:59 GMT
	I0813 00:12:59.106465  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:59.106841  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:59.106859  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:59.106864  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:59.106868  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:59.108822  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:12:59.108843  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:59.108850  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:59.108855  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:59.108859  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:59.108863  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:59.108867  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:59 GMT
	I0813 00:12:59.108953  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:12:59.109219  743232 pod_ready.go:102] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"False"
	I0813 00:12:59.603066  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:12:59.603096  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:59.603104  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:59.603111  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:59.607485  743232 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 00:12:59.607523  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:59.607531  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:59.607535  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:59 GMT
	I0813 00:12:59.607539  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:59.607549  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:59.607554  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:59.607967  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:12:59.608346  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:12:59.608362  743232 round_trippers.go:438] Request Headers:
	I0813 00:12:59.608370  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:12:59.608376  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:12:59.610556  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:12:59.610573  743232 round_trippers.go:460] Response Headers:
	I0813 00:12:59.610578  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:12:59.610581  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:12:59.610584  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:12:59.610592  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:12:59.610595  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:12:59 GMT
	I0813 00:12:59.610751  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:00.103567  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:00.103609  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:00.103618  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:00.103624  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:00.106154  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:00.106179  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:00.106184  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:00.106188  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:00.106191  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:00.106194  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:00 GMT
	I0813 00:13:00.106197  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:00.106340  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:00.106720  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:00.106737  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:00.106752  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:00.106756  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:00.108945  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:00.108970  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:00.108977  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:00.108982  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:00.108986  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:00.108991  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:00.108995  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:00 GMT
	I0813 00:13:00.109083  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:00.603304  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:00.603331  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:00.603337  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:00.603341  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:00.605855  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:00.605880  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:00.605887  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:00 GMT
	I0813 00:13:00.605892  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:00.605897  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:00.605902  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:00.605907  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:00.606076  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:00.606440  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:00.606456  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:00.606461  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:00.606465  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:00.608216  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:00.608236  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:00.608243  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:00.608247  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:00 GMT
	I0813 00:13:00.608252  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:00.608257  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:00.608262  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:00.608379  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:01.103018  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:01.103047  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:01.103053  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:01.103057  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:01.105910  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:01.105939  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:01.105946  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:01 GMT
	I0813 00:13:01.105950  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:01.105955  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:01.105960  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:01.105965  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:01.106099  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:01.106464  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:01.106479  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:01.106484  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:01.106488  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:01.108436  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:01.108455  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:01.108462  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:01.108467  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:01.108470  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:01.108473  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:01.108476  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:01 GMT
	I0813 00:13:01.108601  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:01.603189  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:01.603221  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:01.603230  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:01.603244  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:01.620371  743232 round_trippers.go:457] Response Status: 200 OK in 17 milliseconds
	I0813 00:13:01.620397  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:01.620403  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:01.620407  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:01 GMT
	I0813 00:13:01.620410  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:01.620413  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:01.620415  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:01.620512  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:01.620891  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:01.620904  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:01.620910  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:01.620914  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:01.622930  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:01.622956  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:01.622963  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:01.622969  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:01 GMT
	I0813 00:13:01.622973  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:01.622978  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:01.622982  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:01.623115  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:01.623392  743232 pod_ready.go:102] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"False"
	I0813 00:13:02.103712  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:02.103743  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:02.103750  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:02.103756  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:02.106519  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:02.106547  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:02.106555  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:02 GMT
	I0813 00:13:02.106560  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:02.106565  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:02.106570  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:02.106575  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:02.106691  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:02.107017  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:02.107029  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:02.107034  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:02.107040  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:02.108829  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:02.108848  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:02.108853  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:02.108856  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:02.108859  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:02.108862  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:02.108865  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:02 GMT
	I0813 00:13:02.108941  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:02.603565  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:02.603594  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:02.603600  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:02.603604  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:02.606172  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:02.606202  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:02.606208  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:02.606212  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:02.606216  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:02.606219  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:02.606222  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:02 GMT
	I0813 00:13:02.606323  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:02.606676  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:02.606689  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:02.606694  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:02.606698  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:02.608610  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:02.608630  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:02.608634  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:02.608638  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:02.608641  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:02.608644  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:02.608647  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:02 GMT
	I0813 00:13:02.608735  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:03.103602  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:03.103633  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:03.103639  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:03.103643  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:03.106296  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:03.106324  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:03.106330  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:03 GMT
	I0813 00:13:03.106334  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:03.106338  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:03.106341  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:03.106345  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:03.106476  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:03.106893  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:03.106910  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:03.106918  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:03.106924  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:03.108844  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:03.108866  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:03.108871  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:03.108875  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:03 GMT
	I0813 00:13:03.108879  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:03.108885  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:03.108897  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:03.109013  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:03.603670  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:03.603702  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:03.603719  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:03.603723  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:03.606514  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:03.606538  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:03.606543  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:03.606547  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:03 GMT
	I0813 00:13:03.606550  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:03.606554  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:03.606557  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:03.606656  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:03.607022  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:03.607038  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:03.607043  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:03.607047  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:03.608950  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:03.608972  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:03.608985  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:03.608990  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:03.608994  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:03.608999  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:03.609004  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:03 GMT
	I0813 00:13:03.609143  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:04.103813  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:04.103841  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:04.103847  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:04.103851  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:04.106362  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:04.106383  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:04.106388  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:04.106392  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:04.106395  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:04 GMT
	I0813 00:13:04.106398  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:04.106401  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:04.106506  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:04.106850  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:04.106861  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:04.106866  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:04.106870  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:04.108610  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:04.108631  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:04.108637  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:04.108642  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:04.108647  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:04.108651  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:04.108655  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:04 GMT
	I0813 00:13:04.108835  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:04.109106  743232 pod_ready.go:102] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"False"
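	The repeated GET pairs above (pod, then node, roughly every 500 ms) are minikube's pod_ready check polling the coredns pod until it reports Ready. A minimal sketch of that fixed-interval polling pattern — a hypothetical helper for illustration, not minikube's actual implementation:

	```python
	import time

	def wait_until(probe, timeout=120.0, interval=0.5,
	               clock=time.monotonic, sleep=time.sleep):
	    """Call probe() every `interval` seconds until it returns True.

	    Returns True on success, False once `timeout` seconds have elapsed
	    without probe() succeeding. In minikube's case, probe() would be
	    "fetch the pod and check its Ready condition" (hypothetical mapping).
	    """
	    deadline = clock() + timeout
	    while True:
	        if probe():
	            return True
	        if clock() >= deadline:
	            return False
	        sleep(interval)
	```

	In the log, each probe corresponds to one pod GET (plus a node GET to confirm the node is still healthy), and the `pod_ready.go:102` line is the probe observing `Ready: False` before sleeping until the next tick.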
	I0813 00:13:04.603284  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:04.603314  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:04.603321  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:04.603325  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:04.606004  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:04.606033  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:04.606040  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:04.606043  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:04.606047  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:04.606051  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:04.606054  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:04 GMT
	I0813 00:13:04.606171  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:04.606509  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:04.606522  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:04.606528  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:04.606532  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:04.608419  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:04.608439  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:04.608446  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:04.608451  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:04.608455  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:04 GMT
	I0813 00:13:04.608459  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:04.608464  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:04.608654  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:05.103226  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:05.103256  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:05.103263  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:05.103267  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:05.105873  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:05.105903  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:05.105912  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:05 GMT
	I0813 00:13:05.105918  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:05.105923  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:05.105928  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:05.105938  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:05.106050  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:05.106437  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:05.106452  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:05.106457  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:05.106461  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:05.108316  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:05.108333  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:05.108339  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:05.108344  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:05 GMT
	I0813 00:13:05.108348  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:05.108351  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:05.108356  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:05.108445  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:05.603049  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:05.603076  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:05.603082  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:05.603090  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:05.605662  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:05.605685  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:05.605690  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:05 GMT
	I0813 00:13:05.605694  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:05.605697  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:05.605701  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:05.605704  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:05.605822  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:05.606153  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:05.606165  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:05.606170  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:05.606174  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:05.607986  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:05.608006  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:05.608011  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:05.608015  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:05.608019  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:05 GMT
	I0813 00:13:05.608022  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:05.608027  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:05.608179  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:06.103224  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:06.103253  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:06.103259  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:06.103264  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:06.106090  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:06.106114  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:06.106120  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:06.106123  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:06.106126  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:06.106129  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:06.106133  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:06 GMT
	I0813 00:13:06.106265  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:06.106656  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:06.106677  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:06.106685  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:06.106691  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:06.108783  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:06.108804  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:06.108810  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:06.108814  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:06 GMT
	I0813 00:13:06.108817  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:06.108820  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:06.108825  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:06.108937  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:06.109324  743232 pod_ready.go:102] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"False"
	I0813 00:13:06.603456  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:06.603482  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:06.603491  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:06.603496  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:06.606498  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:06.606528  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:06.606535  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:06.606540  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:06.606545  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:06.606549  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:06.606555  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:06 GMT
	I0813 00:13:06.606694  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:06.607077  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:06.607094  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:06.607101  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:06.607107  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:06.609472  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:06.609510  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:06.609519  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:06.609524  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:06.609528  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:06.609533  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:06.609538  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:06 GMT
	I0813 00:13:06.609716  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:07.103284  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:07.103316  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:07.103323  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:07.103327  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:07.106122  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:07.106148  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:07.106159  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:07.106164  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:07.106167  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:07.106172  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:07 GMT
	I0813 00:13:07.106176  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:07.106280  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:07.106647  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:07.106663  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:07.106670  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:07.106675  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:07.108782  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:07.108805  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:07.108810  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:07.108814  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:07.108817  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:07.108820  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:07.108823  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:07 GMT
	I0813 00:13:07.108976  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:07.603697  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:07.603737  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:07.603745  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:07.603752  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:07.606516  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:07.606545  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:07.606553  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:07.606557  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:07.606560  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:07 GMT
	I0813 00:13:07.606563  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:07.606565  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:07.606654  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:07.607012  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:07.607030  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:07.607035  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:07.607040  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:07.609033  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:07.609052  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:07.609057  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:07.609061  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:07.609064  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:07.609067  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:07.609070  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:07 GMT
	I0813 00:13:07.609161  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:08.103120  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:08.103164  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:08.103171  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:08.103175  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:08.105847  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:08.105876  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:08.105882  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:08.105886  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:08.105893  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:08.105897  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:08.105900  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:08 GMT
	I0813 00:13:08.106008  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:08.106392  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:08.106409  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:08.106414  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:08.106418  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:08.108447  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:08.108471  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:08.108480  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:08.108485  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:08.108489  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:08.108494  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:08.108501  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:08 GMT
	I0813 00:13:08.108601  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:08.603180  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:08.603213  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:08.603219  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:08.603223  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:08.605897  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:08.605921  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:08.605928  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:08.605932  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:08.605939  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:08.605943  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:08.605947  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:08 GMT
	I0813 00:13:08.606055  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:08.606419  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:08.606434  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:08.606441  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:08.606446  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:08.608295  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:08.608328  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:08.608335  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:08.608340  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:08.608345  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:08 GMT
	I0813 00:13:08.608349  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:08.608352  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:08.608452  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:08.608752  743232 pod_ready.go:102] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"False"
	I0813 00:13:09.103034  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:09.103063  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:09.103071  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:09.103076  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:09.105828  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:09.105852  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:09.105860  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:09.105864  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:09.105869  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:09.105874  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:09.105879  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:09 GMT
	I0813 00:13:09.106038  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:09.106401  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:09.106417  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:09.106424  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:09.106429  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:09.108331  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:09.108355  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:09.108373  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:09.108382  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:09.108387  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:09 GMT
	I0813 00:13:09.108395  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:09.108408  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:09.108516  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:09.603776  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:09.603802  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:09.603810  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:09.603816  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:09.606475  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:09.606499  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:09.606505  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:09.606509  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:09.606512  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:09.606515  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:09.606518  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:09 GMT
	I0813 00:13:09.606621  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:09.606963  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:09.606976  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:09.606981  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:09.606985  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:09.608861  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:09.608878  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:09.608883  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:09.608887  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:09.608890  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:09.608893  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:09.608897  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:09 GMT
	I0813 00:13:09.609033  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:10.103661  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:10.103725  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:10.103731  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:10.103736  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:10.106262  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:10.106283  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:10.106288  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:10.106291  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:10.106294  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:10 GMT
	I0813 00:13:10.106297  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:10.106300  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:10.106475  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:10.106934  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:10.106951  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:10.106956  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:10.106960  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:10.111863  743232 round_trippers.go:457] Response Status: 200 OK in 4 milliseconds
	I0813 00:13:10.111890  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:10.111898  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:10.111903  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:10.111908  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:10.111914  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:10.111919  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:10 GMT
	I0813 00:13:10.112051  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:10.603640  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:10.603667  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:10.603673  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:10.603681  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:10.606373  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:10.606400  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:10.606406  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:10.606410  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:10.606413  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:10.606417  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:10.606426  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:10 GMT
	I0813 00:13:10.606561  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:10.606963  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:10.606981  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:10.606987  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:10.606992  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:10.608979  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:10.608999  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:10.609005  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:10.609008  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:10.609011  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:10.609015  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:10.609018  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:10 GMT
	I0813 00:13:10.609139  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:10.609452  743232 pod_ready.go:102] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"False"
	I0813 00:13:11.103776  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:11.103805  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:11.103811  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:11.103815  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:11.106353  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:11.106373  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:11.106379  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:11.106382  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:11.106386  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:11.106390  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:11.106393  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:11 GMT
	I0813 00:13:11.106509  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:11.106852  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:11.106865  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:11.106870  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:11.106876  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:11.108684  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:11.108706  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:11.108713  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:11 GMT
	I0813 00:13:11.108719  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:11.108724  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:11.108730  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:11.108734  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:11.108928  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:11.603453  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:11.603483  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:11.603491  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:11.603496  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:11.605980  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:11.606003  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:11.606008  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:11.606011  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:11.606014  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:11.606018  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:11.606030  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:11 GMT
	I0813 00:13:11.606118  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:11.606518  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:11.606537  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:11.606544  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:11.606550  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:11.608449  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:11.608469  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:11.608474  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:11.608477  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:11.608480  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:11.608483  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:11.608486  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:11 GMT
	I0813 00:13:11.608593  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:12.103128  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:12.103155  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:12.103161  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:12.103165  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:12.105943  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:12.105967  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:12.105974  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:12.105977  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:12.105980  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:12 GMT
	I0813 00:13:12.105983  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:12.105986  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:12.106097  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:12.106495  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:12.106510  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:12.106516  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:12.106520  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:12.108667  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:12.108694  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:12.108702  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:12.108708  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:12.108713  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:12.108717  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:12.108723  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:12 GMT
	I0813 00:13:12.108846  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:12.603348  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:12.603377  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:12.603385  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:12.603390  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:12.606091  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:12.606115  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:12.606122  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:12 GMT
	I0813 00:13:12.606128  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:12.606132  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:12.606137  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:12.606142  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:12.606350  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:12.606694  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:12.606707  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:12.606712  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:12.606715  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:12.608504  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:12.608523  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:12.608529  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:12.608534  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:12.608538  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:12.608542  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:12.608546  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:12 GMT
	I0813 00:13:12.608697  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:13.103152  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:13.103182  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:13.103188  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:13.103192  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:13.105632  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:13.105659  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:13.105666  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:13.105671  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:13 GMT
	I0813 00:13:13.105676  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:13.105682  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:13.105686  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:13.105827  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:13.106251  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:13.106271  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:13.106279  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:13.106294  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:13.108343  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:13.108367  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:13.108374  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:13.108379  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:13.108384  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:13.108388  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:13.108392  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:13 GMT
	I0813 00:13:13.108509  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:13.108838  743232 pod_ready.go:102] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"False"
	I0813 00:13:13.603051  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:13.603077  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:13.603103  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:13.603109  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:13.605985  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:13.606019  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:13.606025  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:13.606029  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:13.606032  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:13.606035  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:13.606039  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:13 GMT
	I0813 00:13:13.606142  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:13.606642  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:13.606662  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:13.606667  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:13.606671  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:13.608813  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:13.608842  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:13.608850  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:13.608856  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:13 GMT
	I0813 00:13:13.608860  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:13.608864  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:13.608867  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:13.609047  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:14.103264  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:14.103301  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:14.103308  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:14.103311  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:14.105973  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:14.106001  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:14.106008  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:14.106013  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:14.106018  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:14 GMT
	I0813 00:13:14.106022  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:14.106037  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:14.106208  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:14.106679  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:14.106696  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:14.106701  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:14.106705  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:14.108638  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:14.108658  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:14.108663  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:14.108667  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:14.108672  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:14.108677  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:14.108681  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:14 GMT
	I0813 00:13:14.108839  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:14.603331  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:14.603362  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:14.603368  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:14.603372  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:14.606091  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:14.606122  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:14.606130  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:14.606135  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:14 GMT
	I0813 00:13:14.606140  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:14.606143  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:14.606147  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:14.606315  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:14.606701  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:14.606715  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:14.606721  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:14.606725  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:14.608465  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:14.608501  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:14.608507  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:14.608511  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:14.608514  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:14.608518  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:14.608522  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:14 GMT
	I0813 00:13:14.608698  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:15.103272  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:15.103301  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:15.103307  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:15.103311  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:15.106232  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:15.106259  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:15.106266  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:15.106271  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:15 GMT
	I0813 00:13:15.106274  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:15.106281  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:15.106284  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:15.106403  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:15.106781  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:15.106795  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:15.106801  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:15.106805  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:15.108738  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:15.108766  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:15.108772  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:15.108778  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:15.108781  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:15 GMT
	I0813 00:13:15.108784  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:15.108788  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:15.108991  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:15.109432  743232 pod_ready.go:102] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"False"
	I0813 00:13:15.603579  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:15.603608  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:15.603616  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:15.603623  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:15.607223  743232 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:13:15.607257  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:15.607263  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:15.607267  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:15.607271  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:15 GMT
	I0813 00:13:15.607275  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:15.607280  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:15.607415  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:15.607857  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:15.607872  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:15.607877  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:15.607881  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:15.609713  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:15.609732  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:15.609738  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:15.609741  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:15.609744  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:15.609747  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:15.609751  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:15 GMT
	I0813 00:13:15.609845  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:16.103485  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:16.103515  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:16.103521  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:16.103525  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:16.106194  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:16.106214  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:16.106220  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:16.106223  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:16.106231  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:16 GMT
	I0813 00:13:16.106237  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:16.106242  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:16.106892  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:16.107618  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:16.107637  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:16.107644  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:16.107650  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:16.109667  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:16.109687  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:16.109694  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:16.109699  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:16.109704  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:16.109710  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:16.109715  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:16 GMT
	I0813 00:13:16.109843  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:16.603231  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:16.603262  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:16.603268  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:16.603272  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:16.605924  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:16.605953  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:16.605960  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:16.605964  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:16.605967  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:16.605972  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:16.605977  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:16 GMT
	I0813 00:13:16.606130  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:16.606512  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:16.606528  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:16.606533  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:16.606537  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:16.608542  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:16.608566  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:16.608573  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:16.608578  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:16.608582  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:16.608587  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:16 GMT
	I0813 00:13:16.608592  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:16.608769  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:17.103310  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:17.103348  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:17.103358  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:17.103364  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:17.106392  743232 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:13:17.106420  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:17.106428  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:17.106433  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:17.106437  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:17.106442  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:17 GMT
	I0813 00:13:17.106447  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:17.106561  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:17.106928  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:17.106945  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:17.106950  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:17.106954  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:17.109336  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:17.109369  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:17.109377  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:17.109384  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:17 GMT
	I0813 00:13:17.109389  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:17.109394  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:17.109399  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:17.109530  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:17.109824  743232 pod_ready.go:102] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"False"
	I0813 00:13:17.603130  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:17.603160  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:17.603169  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:17.603175  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:17.605842  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:17.605868  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:17.605874  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:17.605878  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:17.605881  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:17.605885  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:17.605888  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:17 GMT
	I0813 00:13:17.606013  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:17.606360  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:17.606368  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:17.606373  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:17.606377  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:17.608327  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:17.608347  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:17.608354  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:17.608359  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:17.608363  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:17.608370  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:17.608374  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:17 GMT
	I0813 00:13:17.608465  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:18.103411  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:18.103443  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:18.103449  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:18.103453  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:18.106346  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:18.106370  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:18.106375  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:18.106379  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:18.106383  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:18.106386  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:18.106389  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:18 GMT
	I0813 00:13:18.106531  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:18.106914  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:18.106928  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:18.106933  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:18.106937  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:18.108809  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:18.108839  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:18.108849  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:18.108855  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:18.108862  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:18.108868  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:18.108874  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:18 GMT
	I0813 00:13:18.109009  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:18.603101  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:18.603130  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:18.603143  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:18.603147  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:18.605828  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:18.605856  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:18.605863  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:18.605868  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:18.605873  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:18 GMT
	I0813 00:13:18.605877  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:18.605883  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:18.605981  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:18.606360  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:18.606375  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:18.606380  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:18.606385  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:18.608285  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:18.608310  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:18.608317  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:18.608321  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:18.608324  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:18.608328  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:18.608331  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:18 GMT
	I0813 00:13:18.608429  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:19.103041  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:19.103080  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:19.103088  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:19.103093  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:19.106055  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:19.106086  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:19.106091  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:19.106096  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:19.106101  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:19.106105  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:19.106108  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:19 GMT
	I0813 00:13:19.106271  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:19.106660  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:19.106676  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:19.106682  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:19.106686  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:19.108915  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:19.108942  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:19.108951  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:19.108955  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:19.108960  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:19 GMT
	I0813 00:13:19.108964  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:19.108970  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:19.109095  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:19.603707  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:19.603734  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:19.603741  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:19.603746  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:19.606608  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:19.606664  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:19.606672  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:19.606676  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:19.606679  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:19.606683  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:19.606686  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:19 GMT
	I0813 00:13:19.606812  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:19.607256  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:19.607274  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:19.607282  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:19.607287  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:19.609343  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:19.609368  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:19.609374  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:19 GMT
	I0813 00:13:19.609377  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:19.609381  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:19.609385  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:19.609388  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:19.609539  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:19.609837  743232 pod_ready.go:102] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"False"
	I0813 00:13:20.103302  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:20.103336  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:20.103345  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:20.103351  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:20.105910  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:20.105944  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:20.105956  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:20.105962  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:20.105967  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:20.105972  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:20 GMT
	I0813 00:13:20.105976  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:20.106137  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:20.106583  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:20.106600  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:20.106606  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:20.106611  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:20.108467  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:20.108507  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:20.108514  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:20.108517  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:20.108521  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:20.108524  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:20.108527  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:20 GMT
	I0813 00:13:20.108691  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:20.603152  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:20.603185  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:20.603192  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:20.603196  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:20.606073  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:20.606100  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:20.606107  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:20.606110  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:20.606113  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:20.606116  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:20.606119  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:20 GMT
	I0813 00:13:20.606248  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:20.606646  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:20.606680  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:20.606688  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:20.606694  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:20.608781  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:20.608806  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:20.608814  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:20.608818  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:20 GMT
	I0813 00:13:20.608821  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:20.608825  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:20.608828  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:20.608971  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:21.103645  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:21.103684  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:21.103706  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:21.103713  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:21.106424  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:21.106452  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:21.106459  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:21.106464  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:21.106468  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:21.106473  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:21.106477  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:21 GMT
	I0813 00:13:21.106637  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:21.107022  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:21.107039  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:21.107046  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:21.107051  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:21.109001  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:21.109022  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:21.109029  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:21 GMT
	I0813 00:13:21.109034  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:21.109038  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:21.109043  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:21.109047  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:21.109178  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:21.603880  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:21.603910  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:21.603918  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:21.603927  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:21.606324  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:21.606358  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:21.606365  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:21.606370  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:21.606375  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:21.606381  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:21.606386  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:21 GMT
	I0813 00:13:21.606500  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:21.606848  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:21.606862  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:21.606867  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:21.606871  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:21.608677  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:21.608695  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:21.608700  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:21.608703  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:21.608706  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:21.608709  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:21 GMT
	I0813 00:13:21.608712  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:21.608809  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:22.103415  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:22.103451  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:22.103459  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:22.103465  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:22.106301  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:22.106330  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:22.106338  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:22.106343  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:22 GMT
	I0813 00:13:22.106351  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:22.106355  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:22.106360  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:22.106475  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:22.106934  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:22.106956  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:22.106963  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:22.106969  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:22.109172  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:22.109200  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:22.109208  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:22.109211  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:22.109215  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:22.109219  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:22.109273  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:22 GMT
	I0813 00:13:22.109377  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:22.109719  743232 pod_ready.go:102] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"False"
	I0813 00:13:22.603926  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:22.603955  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:22.603961  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:22.603965  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:22.606718  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:22.606749  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:22.606758  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:22.606762  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:22.606766  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:22.606769  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:22.606772  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:22 GMT
	I0813 00:13:22.606955  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:22.607329  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:22.607344  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:22.607349  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:22.607354  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:22.609415  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:22.609440  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:22.609448  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:22.609453  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:22.609460  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:22 GMT
	I0813 00:13:22.609464  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:22.609469  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:22.609644  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:23.103394  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:23.103428  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:23.103434  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:23.103439  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:23.106083  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:23.106111  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:23.106120  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:23 GMT
	I0813 00:13:23.106125  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:23.106134  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:23.106138  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:23.106143  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:23.106326  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:23.106738  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:23.106753  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:23.106758  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:23.106762  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:23.108709  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:23.108733  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:23.108739  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:23.108743  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:23.108746  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:23.108749  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:23.108751  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:23 GMT
	I0813 00:13:23.108857  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:23.603398  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:23.603424  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:23.603429  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:23.603436  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:23.606339  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:23.606366  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:23.606373  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:23.606377  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:23.606381  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:23 GMT
	I0813 00:13:23.606384  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:23.606387  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:23.606514  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:23.606866  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:23.606881  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:23.606887  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:23.606891  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:23.608924  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:23.608949  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:23.608955  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:23 GMT
	I0813 00:13:23.608961  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:23.608965  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:23.608969  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:23.608974  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:23.609052  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:24.103738  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:24.103774  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:24.103780  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:24.103784  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:24.106628  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:24.106655  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:24.106662  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:24.106667  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:24 GMT
	I0813 00:13:24.106672  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:24.106676  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:24.106685  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:24.106815  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:24.107183  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:24.107200  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:24.107206  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:24.107211  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:24.109185  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:24.109209  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:24.109217  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:24.109279  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:24.109284  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:24.109288  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:24 GMT
	I0813 00:13:24.109291  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:24.109404  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:24.603992  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:24.604023  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:24.604029  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:24.604033  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:24.606610  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:24.606675  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:24.606681  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:24 GMT
	I0813 00:13:24.606686  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:24.606689  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:24.606692  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:24.606696  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:24.606790  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:24.607148  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:24.607163  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:24.607168  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:24.607172  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:24.609093  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:24.609123  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:24.609130  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:24.609136  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:24.609140  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:24.609145  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:24.609148  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:24 GMT
	I0813 00:13:24.609272  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:24.609583  743232 pod_ready.go:102] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"False"
	I0813 00:13:25.103924  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:25.103956  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:25.103962  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:25.103966  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:25.106868  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:25.106899  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:25.106908  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:25.106913  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:25.106917  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:25.106926  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:25.106935  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:25 GMT
	I0813 00:13:25.107068  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:25.107479  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:25.107494  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:25.107499  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:25.107503  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:25.109376  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:25.109397  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:25.109403  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:25.109408  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:25.109413  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:25.109417  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:25 GMT
	I0813 00:13:25.109422  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:25.109684  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:25.603311  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:25.603343  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:25.603351  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:25.603357  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:25.606234  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:25.606259  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:25.606270  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:25.606275  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:25.606279  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:25.606283  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:25.606288  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:25 GMT
	I0813 00:13:25.606411  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:25.606792  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:25.606807  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:25.606812  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:25.606816  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:25.608678  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:25.608701  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:25.608706  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:25.608710  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:25 GMT
	I0813 00:13:25.608713  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:25.608716  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:25.608720  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:25.608847  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:26.103455  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:26.103488  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:26.103493  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:26.103498  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:26.106266  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:26.106295  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:26.106303  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:26.106307  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:26.106310  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:26.106317  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:26.106324  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:26 GMT
	I0813 00:13:26.106430  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:26.106832  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:26.106848  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:26.106854  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:26.106858  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:26.109046  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:26.109070  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:26.109076  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:26.109081  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:26.109084  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:26.109088  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:26.109093  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:26 GMT
	I0813 00:13:26.109208  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:26.603788  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:26.603816  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:26.603822  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:26.603827  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:26.606540  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:26.606564  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:26.606570  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:26.606577  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:26.606580  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:26.606584  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:26.606587  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:26 GMT
	I0813 00:13:26.606698  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:26.607047  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:26.607059  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:26.607064  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:26.607068  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:26.609188  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:26.609213  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:26.609220  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:26.609281  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:26 GMT
	I0813 00:13:26.609286  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:26.609291  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:26.609296  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:26.609405  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:26.609723  743232 pod_ready.go:102] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"False"
	I0813 00:13:27.103995  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:27.104027  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:27.104034  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:27.104038  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:27.106522  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:27.106555  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:27.106567  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:27.106572  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:27.106577  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:27.106585  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:27.106588  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:27 GMT
	I0813 00:13:27.106725  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:27.107177  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:27.107194  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:27.107200  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:27.107204  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:27.109085  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:27.109103  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:27.109109  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:27.109114  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:27.109118  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:27.109123  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:27.109127  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:27 GMT
	I0813 00:13:27.109220  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:27.603879  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:27.603911  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:27.603918  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:27.603922  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:27.606549  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:27.606575  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:27.606583  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:27.606588  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:27.606592  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:27.606597  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:27 GMT
	I0813 00:13:27.606601  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:27.606705  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:27.607144  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:27.607160  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:27.607166  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:27.607170  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:27.609119  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:27.609134  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:27.609140  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:27 GMT
	I0813 00:13:27.609145  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:27.609149  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:27.609154  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:27.609159  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:27.609373  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:28.103492  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:28.103523  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:28.103529  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:28.103537  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:28.106228  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:28.106256  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:28.106263  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:28.106267  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:28.106271  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:28.106275  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:28.106279  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:28 GMT
	I0813 00:13:28.106384  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:28.106741  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:28.106758  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:28.106777  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:28.106784  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:28.108500  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:28.108518  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:28.108525  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:28 GMT
	I0813 00:13:28.108530  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:28.108534  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:28.108538  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:28.108543  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:28.108656  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:28.603287  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:28.603318  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:28.603325  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:28.603329  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:28.606104  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:28.606133  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:28.606140  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:28.606155  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:28.606160  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:28.606165  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:28.606170  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:28 GMT
	I0813 00:13:28.606338  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:28.606706  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:28.606732  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:28.606737  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:28.606741  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:28.608780  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:28.608800  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:28.608806  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:28 GMT
	I0813 00:13:28.608810  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:28.608814  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:28.608818  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:28.608823  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:28.608935  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:29.103573  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:29.103594  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:29.103600  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:29.103604  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:29.105945  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:29.105969  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:29.105977  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:29.105982  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:29.105987  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:29.105992  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:29 GMT
	I0813 00:13:29.105997  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:29.106170  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:29.106507  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:29.106521  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:29.106526  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:29.106595  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:29.108515  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:29.108532  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:29.108539  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:29.108543  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:29 GMT
	I0813 00:13:29.108548  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:29.108551  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:29.108555  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:29.108638  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:29.108870  743232 pod_ready.go:102] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"False"
	I0813 00:13:29.603213  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:29.603239  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:29.603247  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:29.603253  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:29.606333  743232 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:13:29.606361  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:29.606369  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:29.606374  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:29.606378  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:29.606383  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:29.606388  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:29 GMT
	I0813 00:13:29.606503  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"451","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5628 chars]
	I0813 00:13:29.606862  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:29.606878  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:29.606884  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:29.606890  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:29.608957  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:29.608978  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:29.608984  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:29 GMT
	I0813 00:13:29.608988  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:29.608991  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:29.608994  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:29.609000  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:29.609088  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:30.103756  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:30.103788  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:30.103794  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:30.103799  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:30.106603  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:30.106629  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:30.106636  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:30.106641  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:30.106646  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:30.106651  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:30.106656  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:30 GMT
	I0813 00:13:30.106786  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"527","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5736 chars]
	I0813 00:13:30.107164  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:30.107178  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:30.107183  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:30.107187  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:30.109166  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:30.109187  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:30.109206  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:30 GMT
	I0813 00:13:30.109212  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:30.109217  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:30.109256  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:30.109265  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:30.109366  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:30.109698  743232 pod_ready.go:92] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"True"
	I0813 00:13:30.109719  743232 pod_ready.go:81] duration metric: took 49.517086913s waiting for pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:30.109737  743232 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-vvdnj" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:30.109805  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-vvdnj
	I0813 00:13:30.109815  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:30.109823  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:30.109831  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:30.111567  743232 round_trippers.go:457] Response Status: 404 Not Found in 1 milliseconds
	I0813 00:13:30.111588  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:30.111596  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:30 GMT
	I0813 00:13:30.111601  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:30.111606  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:30.111615  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:30.111620  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:30.111625  743232 round_trippers.go:463]     Content-Length: 216
	I0813 00:13:30.111648  743232 request.go:1123] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-558bd4d5db-vvdnj\" not found","reason":"NotFound","details":{"name":"coredns-558bd4d5db-vvdnj","kind":"pods"},"code":404}
	I0813 00:13:30.112038  743232 pod_ready.go:97] error getting pod "coredns-558bd4d5db-vvdnj" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-vvdnj" not found
	I0813 00:13:30.112056  743232 pod_ready.go:81] duration metric: took 2.307182ms waiting for pod "coredns-558bd4d5db-vvdnj" in "kube-system" namespace to be "Ready" ...
	E0813 00:13:30.112065  743232 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-vvdnj" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-vvdnj" not found
	I0813 00:13:30.112072  743232 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210813001157-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:30.112119  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210813001157-676638
	I0813 00:13:30.112127  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:30.112131  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:30.112135  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:30.113813  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:30.113830  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:30.113835  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:30.113840  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:30 GMT
	I0813 00:13:30.113845  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:30.113849  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:30.113854  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:30.114032  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210813001157-676638","namespace":"kube-system","uid":"7bd171f8-9ba7-465d-8320-82ca9b0fe38b","resourceVersion":"487","creationTimestamp":"2021-08-13T00:12:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"8dfde7453a6ad04def13ca08d3dd1846","kubernetes.io/config.mirror":"8dfde7453a6ad04def13ca08d3dd1846","kubernetes.io/config.seen":"2021-08-13T00:12:29.316613009Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.has [truncated 5564 chars]
	I0813 00:13:30.114363  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:30.114379  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:30.114386  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:30.114392  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:30.116199  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:30.116213  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:30.116217  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:30.116221  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:30.116223  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:30.116226  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:30.116229  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:30 GMT
	I0813 00:13:30.116338  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:30.116578  743232 pod_ready.go:92] pod "etcd-multinode-20210813001157-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:13:30.116591  743232 pod_ready.go:81] duration metric: took 4.509655ms waiting for pod "etcd-multinode-20210813001157-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:30.116607  743232 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210813001157-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:30.116654  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210813001157-676638
	I0813 00:13:30.116664  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:30.116670  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:30.116677  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:30.118494  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:30.118513  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:30.118520  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:30.118525  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:30.118530  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:30 GMT
	I0813 00:13:30.118535  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:30.118540  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:30.118702  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210813001157-676638","namespace":"kube-system","uid":"5ccefdb1-48ae-4825-ab83-3c233583f503","resourceVersion":"340","creationTimestamp":"2021-08-13T00:12:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"3509319e0214f60b63092919a691f0e6","kubernetes.io/config.mirror":"3509319e0214f60b63092919a691f0e6","kubernetes.io/config.seen":"2021-08-13T00:12:29.316635432Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-addres [truncated 8091 chars]
	I0813 00:13:30.118995  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:30.119006  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:30.119011  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:30.119015  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:30.120642  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:30.120657  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:30.120661  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:30 GMT
	I0813 00:13:30.120665  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:30.120668  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:30.120671  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:30.120674  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:30.120822  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:30.121130  743232 pod_ready.go:92] pod "kube-apiserver-multinode-20210813001157-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:13:30.121143  743232 pod_ready.go:81] duration metric: took 4.527204ms waiting for pod "kube-apiserver-multinode-20210813001157-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:30.121157  743232 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210813001157-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:30.121220  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210813001157-676638
	I0813 00:13:30.121277  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:30.121284  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:30.121290  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:30.123046  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:30.123061  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:30.123065  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:30.123068  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:30.123072  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:30 GMT
	I0813 00:13:30.123077  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:30.123080  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:30.123188  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210813001157-676638","namespace":"kube-system","uid":"9799e08d-2fec-43b7-b6f7-fecee59f7bfe","resourceVersion":"331","creationTimestamp":"2021-08-13T00:12:29Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fce4605954dd6767ca408495896d3089","kubernetes.io/config.mirror":"fce4605954dd6767ca408495896d3089","kubernetes.io/config.seen":"2021-08-13T00:12:29.316636751Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7657 chars]
	I0813 00:13:30.123502  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:30.123517  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:30.123522  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:30.123526  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:30.125140  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:30.125175  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:30.125182  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:30 GMT
	I0813 00:13:30.125188  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:30.125192  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:30.125197  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:30.125201  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:30.125361  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:30.125692  743232 pod_ready.go:92] pod "kube-controller-manager-multinode-20210813001157-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:13:30.125718  743232 pod_ready.go:81] duration metric: took 4.552185ms waiting for pod "kube-controller-manager-multinode-20210813001157-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:30.125736  743232 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mkg5f" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:30.125797  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mkg5f
	I0813 00:13:30.125808  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:30.125817  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:30.125826  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:30.127506  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:30.127523  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:30.127528  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:30 GMT
	I0813 00:13:30.127533  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:30.127537  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:30.127542  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:30.127546  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:30.127668  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mkg5f","generateName":"kube-proxy-","namespace":"kube-system","uid":"c0ce1ac6-2a65-4491-b750-c72877628ba1","resourceVersion":"482","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"31bf8065-dab9-4025-90c4-cbefb4e70b3f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31bf8065-dab9-4025-90c4-cbefb4e70b3f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5756 chars]
	I0813 00:13:30.304446  743232 request.go:600] Waited for 176.374979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:30.304514  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:30.304520  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:30.304526  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:30.304530  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:30.307158  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:30.307180  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:30.307187  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:30.307191  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:30.307195  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:30.307200  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:30.307204  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:30 GMT
	I0813 00:13:30.307383  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:30.307788  743232 pod_ready.go:92] pod "kube-proxy-mkg5f" in "kube-system" namespace has status "Ready":"True"
	I0813 00:13:30.307818  743232 pod_ready.go:81] duration metric: took 182.069104ms waiting for pod "kube-proxy-mkg5f" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:30.307833  743232 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210813001157-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:30.504334  743232 request.go:600] Waited for 196.40921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210813001157-676638
	I0813 00:13:30.504416  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210813001157-676638
	I0813 00:13:30.504423  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:30.504428  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:30.504433  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:30.507077  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:30.507101  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:30.507109  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:30.507114  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:30 GMT
	I0813 00:13:30.507118  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:30.507126  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:30.507130  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:30.507243  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210813001157-676638","namespace":"kube-system","uid":"e9679488-3572-4ab9-bd26-84259c8744e1","resourceVersion":"329","creationTimestamp":"2021-08-13T00:12:29Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8f00a7319b0df0d51bb2b1da342fbbf3","kubernetes.io/config.mirror":"8f00a7319b0df0d51bb2b1da342fbbf3","kubernetes.io/config.seen":"2021-08-13T00:12:29.316637767Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:
kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:la [truncated 4539 chars]
	I0813 00:13:30.704682  743232 request.go:600] Waited for 197.098904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:30.704752  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:30.704758  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:30.704763  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:30.704767  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:30.707372  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:30.707403  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:30.707411  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:30.707415  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:30.707419  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:30.707424  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:30.707428  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:30 GMT
	I0813 00:13:30.707541  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:30.707924  743232 pod_ready.go:92] pod "kube-scheduler-multinode-20210813001157-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:13:30.707952  743232 pod_ready.go:81] duration metric: took 400.107786ms waiting for pod "kube-scheduler-multinode-20210813001157-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:30.707964  743232 pod_ready.go:38] duration metric: took 50.132148514s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:13:30.707991  743232 api_server.go:50] waiting for apiserver process to appear ...
	I0813 00:13:30.708043  743232 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 00:13:30.729778  743232 command_runner.go:124] > 1301
	I0813 00:13:30.730650  743232 api_server.go:70] duration metric: took 50.179816981s to wait for apiserver process to appear ...
	I0813 00:13:30.730670  743232 api_server.go:86] waiting for apiserver healthz status ...
	I0813 00:13:30.730681  743232 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 00:13:30.735338  743232 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0813 00:13:30.735448  743232 round_trippers.go:432] GET https://192.168.49.2:8443/version?timeout=32s
	I0813 00:13:30.735459  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:30.735467  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:30.735474  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:30.736191  743232 round_trippers.go:457] Response Status: 200 OK in 0 milliseconds
	I0813 00:13:30.736208  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:30.736212  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:30.736216  743232 round_trippers.go:463]     Content-Length: 263
	I0813 00:13:30.736219  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:30 GMT
	I0813 00:13:30.736222  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:30.736225  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:30.736227  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:30.736255  743232 request.go:1123] Response Body: {
	  "major": "1",
	  "minor": "21",
	  "gitVersion": "v1.21.3",
	  "gitCommit": "ca643a4d1f7bfe34773c74f79527be4afd95bf39",
	  "gitTreeState": "clean",
	  "buildDate": "2021-07-15T20:59:07Z",
	  "goVersion": "go1.16.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0813 00:13:30.736354  743232 api_server.go:139] control plane version: v1.21.3
	I0813 00:13:30.736370  743232 api_server.go:129] duration metric: took 5.694684ms to wait for apiserver health ...
	I0813 00:13:30.736377  743232 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 00:13:30.904803  743232 request.go:600] Waited for 168.346504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0813 00:13:30.904900  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0813 00:13:30.904909  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:30.904918  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:30.904926  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:30.908244  743232 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:13:30.908271  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:30.908278  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:30.908283  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:30.908288  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:30.908292  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:30 GMT
	I0813 00:13:30.908297  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:30.908749  743232 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"532"},"items":[{"metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"527","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 54528 chars]
	I0813 00:13:30.910082  743232 system_pods.go:59] 8 kube-system pods found
	I0813 00:13:30.910107  743232 system_pods.go:61] "coredns-558bd4d5db-n8vmn" [8c6390c7-df4c-4cd1-9668-f97565c6ff6c] Running
	I0813 00:13:30.910112  743232 system_pods.go:61] "etcd-multinode-20210813001157-676638" [7bd171f8-9ba7-465d-8320-82ca9b0fe38b] Running
	I0813 00:13:30.910116  743232 system_pods.go:61] "kindnet-zhxmb" [7155e0a4-3055-4ba8-a352-0298b40e017e] Running
	I0813 00:13:30.910120  743232 system_pods.go:61] "kube-apiserver-multinode-20210813001157-676638" [5ccefdb1-48ae-4825-ab83-3c233583f503] Running
	I0813 00:13:30.910124  743232 system_pods.go:61] "kube-controller-manager-multinode-20210813001157-676638" [9799e08d-2fec-43b7-b6f7-fecee59f7bfe] Running
	I0813 00:13:30.910133  743232 system_pods.go:61] "kube-proxy-mkg5f" [c0ce1ac6-2a65-4491-b750-c72877628ba1] Running
	I0813 00:13:30.910137  743232 system_pods.go:61] "kube-scheduler-multinode-20210813001157-676638" [e9679488-3572-4ab9-bd26-84259c8744e1] Running
	I0813 00:13:30.910141  743232 system_pods.go:61] "storage-provisioner" [5ea28bd2-65e3-48c2-9aad-361791248c9a] Running
	I0813 00:13:30.910145  743232 system_pods.go:74] duration metric: took 173.763528ms to wait for pod list to return data ...
	I0813 00:13:30.910153  743232 default_sa.go:34] waiting for default service account to be created ...
	I0813 00:13:31.104644  743232 request.go:600] Waited for 194.378181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0813 00:13:31.104711  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0813 00:13:31.104717  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:31.104723  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:31.104727  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:31.107316  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:31.107341  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:31.107347  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:31.107350  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:31.107354  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:31.107357  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:31.107361  743232 round_trippers.go:463]     Content-Length: 304
	I0813 00:13:31.107364  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:31 GMT
	I0813 00:13:31.107388  743232 request.go:1123] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"532"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"8e79ced5-0cdc-4539-b9d8-84793d98e9ed","resourceVersion":"408","creationTimestamp":"2021-08-13T00:12:39Z"},"secrets":[{"name":"default-token-xqhgl"}]}]}
	I0813 00:13:31.108001  743232 default_sa.go:45] found service account: "default"
	I0813 00:13:31.108025  743232 default_sa.go:55] duration metric: took 197.866336ms for default service account to be created ...
	I0813 00:13:31.108034  743232 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 00:13:31.304510  743232 request.go:600] Waited for 196.403883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0813 00:13:31.304612  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0813 00:13:31.304621  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:31.304632  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:31.304658  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:31.308229  743232 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:13:31.308260  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:31.308269  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:31.308274  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:31.308279  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:31 GMT
	I0813 00:13:31.308284  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:31.308288  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:31.308788  743232 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"533"},"items":[{"metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"527","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 54528 chars]
	I0813 00:13:31.310090  743232 system_pods.go:86] 8 kube-system pods found
	I0813 00:13:31.310119  743232 system_pods.go:89] "coredns-558bd4d5db-n8vmn" [8c6390c7-df4c-4cd1-9668-f97565c6ff6c] Running
	I0813 00:13:31.310125  743232 system_pods.go:89] "etcd-multinode-20210813001157-676638" [7bd171f8-9ba7-465d-8320-82ca9b0fe38b] Running
	I0813 00:13:31.310129  743232 system_pods.go:89] "kindnet-zhxmb" [7155e0a4-3055-4ba8-a352-0298b40e017e] Running
	I0813 00:13:31.310134  743232 system_pods.go:89] "kube-apiserver-multinode-20210813001157-676638" [5ccefdb1-48ae-4825-ab83-3c233583f503] Running
	I0813 00:13:31.310139  743232 system_pods.go:89] "kube-controller-manager-multinode-20210813001157-676638" [9799e08d-2fec-43b7-b6f7-fecee59f7bfe] Running
	I0813 00:13:31.310147  743232 system_pods.go:89] "kube-proxy-mkg5f" [c0ce1ac6-2a65-4491-b750-c72877628ba1] Running
	I0813 00:13:31.310151  743232 system_pods.go:89] "kube-scheduler-multinode-20210813001157-676638" [e9679488-3572-4ab9-bd26-84259c8744e1] Running
	I0813 00:13:31.310160  743232 system_pods.go:89] "storage-provisioner" [5ea28bd2-65e3-48c2-9aad-361791248c9a] Running
	I0813 00:13:31.310168  743232 system_pods.go:126] duration metric: took 202.129682ms to wait for k8s-apps to be running ...
	I0813 00:13:31.310177  743232 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 00:13:31.310229  743232 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:13:31.321006  743232 system_svc.go:56] duration metric: took 10.816886ms WaitForService to wait for kubelet.
	I0813 00:13:31.321040  743232 kubeadm.go:547] duration metric: took 50.770210335s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 00:13:31.321065  743232 node_conditions.go:102] verifying NodePressure condition ...
	I0813 00:13:31.504517  743232 request.go:600] Waited for 183.35587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0813 00:13:31.504605  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes
	I0813 00:13:31.504612  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:31.504617  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:31.504640  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:31.507167  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:31.507195  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:31.507202  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:31.507207  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:31.507211  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:31.507216  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:31 GMT
	I0813 00:13:31.507221  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:31.507348  743232 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"533"},"items":[{"metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","opera [truncated 6657 chars]
	I0813 00:13:31.508504  743232 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 00:13:31.508533  743232 node_conditions.go:123] node cpu capacity is 8
	I0813 00:13:31.508550  743232 node_conditions.go:105] duration metric: took 187.477515ms to run NodePressure ...
	I0813 00:13:31.508570  743232 start.go:231] waiting for startup goroutines ...
	I0813 00:13:31.511639  743232 out.go:177] 
	I0813 00:13:31.511968  743232 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/config.json ...
	I0813 00:13:31.514321  743232 out.go:177] * Starting node multinode-20210813001157-676638-m02 in cluster multinode-20210813001157-676638
	I0813 00:13:31.514356  743232 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 00:13:31.515981  743232 out.go:177] * Pulling base image ...
	I0813 00:13:31.516019  743232 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:13:31.516038  743232 cache.go:56] Caching tarball of preloaded images
	I0813 00:13:31.516118  743232 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0813 00:13:31.516194  743232 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 00:13:31.516231  743232 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 00:13:31.516335  743232 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/config.json ...
	I0813 00:13:31.611008  743232 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0813 00:13:31.611047  743232 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0813 00:13:31.611064  743232 cache.go:205] Successfully downloaded all kic artifacts
	I0813 00:13:31.611105  743232 start.go:313] acquiring machines lock for multinode-20210813001157-676638-m02: {Name:mkdd036ed32bd934982fb03c89029f7790d918ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:13:31.611287  743232 start.go:317] acquired machines lock for "multinode-20210813001157-676638-m02" in 155.735µs
	I0813 00:13:31.611321  743232 start.go:89] Provisioning new machine with config: &{Name:multinode-20210813001157-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813001157-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0813 00:13:31.611398  743232 start.go:126] createHost starting for "m02" (driver="docker")
	I0813 00:13:31.614446  743232 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0813 00:13:31.614576  743232 start.go:160] libmachine.API.Create for "multinode-20210813001157-676638" (driver="docker")
	I0813 00:13:31.614606  743232 client.go:168] LocalClient.Create starting
	I0813 00:13:31.614680  743232 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0813 00:13:31.614726  743232 main.go:130] libmachine: Decoding PEM data...
	I0813 00:13:31.615131  743232 main.go:130] libmachine: Parsing certificate...
	I0813 00:13:31.615346  743232 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0813 00:13:31.615420  743232 main.go:130] libmachine: Decoding PEM data...
	I0813 00:13:31.615460  743232 main.go:130] libmachine: Parsing certificate...
	I0813 00:13:31.615884  743232 cli_runner.go:115] Run: docker network inspect multinode-20210813001157-676638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 00:13:31.657533  743232 network_create.go:67] Found existing network {name:multinode-20210813001157-676638 subnet:0xc00067a060 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0813 00:13:31.657588  743232 kic.go:106] calculated static IP "192.168.49.3" for the "multinode-20210813001157-676638-m02" container
	I0813 00:13:31.657661  743232 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 00:13:31.699436  743232 cli_runner.go:115] Run: docker volume create multinode-20210813001157-676638-m02 --label name.minikube.sigs.k8s.io=multinode-20210813001157-676638-m02 --label created_by.minikube.sigs.k8s.io=true
	I0813 00:13:31.745122  743232 oci.go:102] Successfully created a docker volume multinode-20210813001157-676638-m02
	I0813 00:13:31.745275  743232 cli_runner.go:115] Run: docker run --rm --name multinode-20210813001157-676638-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210813001157-676638-m02 --entrypoint /usr/bin/test -v multinode-20210813001157-676638-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0813 00:13:32.503786  743232 oci.go:106] Successfully prepared a docker volume multinode-20210813001157-676638-m02
	W0813 00:13:32.503846  743232 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 00:13:32.503855  743232 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 00:13:32.503910  743232 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 00:13:32.503939  743232 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:13:32.503975  743232 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 00:13:32.504065  743232 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20210813001157-676638-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 00:13:32.595236  743232 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-20210813001157-676638-m02 --name multinode-20210813001157-676638-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20210813001157-676638-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-20210813001157-676638-m02 --network multinode-20210813001157-676638 --ip 192.168.49.3 --volume multinode-20210813001157-676638-m02:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0813 00:13:33.091831  743232 cli_runner.go:115] Run: docker container inspect multinode-20210813001157-676638-m02 --format={{.State.Running}}
	I0813 00:13:33.145535  743232 cli_runner.go:115] Run: docker container inspect multinode-20210813001157-676638-m02 --format={{.State.Status}}
	I0813 00:13:33.195467  743232 cli_runner.go:115] Run: docker exec multinode-20210813001157-676638-m02 stat /var/lib/dpkg/alternatives/iptables
	I0813 00:13:33.345597  743232 oci.go:278] the created container "multinode-20210813001157-676638-m02" has a running status.
	I0813 00:13:33.345639  743232 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813001157-676638-m02/id_rsa...
	I0813 00:13:33.455304  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813001157-676638-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0813 00:13:33.455351  743232 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813001157-676638-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 00:13:33.843756  743232 cli_runner.go:115] Run: docker container inspect multinode-20210813001157-676638-m02 --format={{.State.Status}}
	I0813 00:13:33.886616  743232 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 00:13:33.886642  743232 kic_runner.go:115] Args: [docker exec --privileged multinode-20210813001157-676638-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 00:13:36.283908  743232 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20210813001157-676638-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (3.77979526s)
	I0813 00:13:36.283945  743232 kic.go:188] duration metric: took 3.779968 seconds to extract preloaded images to volume
	I0813 00:13:36.284035  743232 cli_runner.go:115] Run: docker container inspect multinode-20210813001157-676638-m02 --format={{.State.Status}}
	I0813 00:13:36.324988  743232 machine.go:88] provisioning docker machine ...
	I0813 00:13:36.325037  743232 ubuntu.go:169] provisioning hostname "multinode-20210813001157-676638-m02"
	I0813 00:13:36.325114  743232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813001157-676638-m02
	I0813 00:13:36.366347  743232 main.go:130] libmachine: Using SSH client type: native
	I0813 00:13:36.366564  743232 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33298 <nil> <nil>}
	I0813 00:13:36.366581  743232 main.go:130] libmachine: About to run SSH command:
	sudo hostname multinode-20210813001157-676638-m02 && echo "multinode-20210813001157-676638-m02" | sudo tee /etc/hostname
	I0813 00:13:36.498367  743232 main.go:130] libmachine: SSH cmd err, output: <nil>: multinode-20210813001157-676638-m02
	
	I0813 00:13:36.498457  743232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813001157-676638-m02
	I0813 00:13:36.538824  743232 main.go:130] libmachine: Using SSH client type: native
	I0813 00:13:36.539032  743232 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33298 <nil> <nil>}
	I0813 00:13:36.539057  743232 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20210813001157-676638-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20210813001157-676638-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20210813001157-676638-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 00:13:36.653430  743232 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 00:13:36.653464  743232 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0813 00:13:36.653492  743232 ubuntu.go:177] setting up certificates
	I0813 00:13:36.653510  743232 provision.go:83] configureAuth start
	I0813 00:13:36.653573  743232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210813001157-676638-m02
	I0813 00:13:36.697337  743232 provision.go:137] copyHostCerts
	I0813 00:13:36.697390  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 00:13:36.697433  743232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem, removing ...
	I0813 00:13:36.697451  743232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 00:13:36.697532  743232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0813 00:13:36.697691  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 00:13:36.697775  743232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem, removing ...
	I0813 00:13:36.697791  743232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 00:13:36.697852  743232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1675 bytes)
	I0813 00:13:36.697925  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 00:13:36.697953  743232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem, removing ...
	I0813 00:13:36.697962  743232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 00:13:36.697992  743232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1082 bytes)
	I0813 00:13:36.698053  743232 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.multinode-20210813001157-676638-m02 san=[192.168.49.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-20210813001157-676638-m02]
	I0813 00:13:36.996557  743232 provision.go:171] copyRemoteCerts
	I0813 00:13:36.996647  743232 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 00:13:36.996703  743232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813001157-676638-m02
	I0813 00:13:37.039005  743232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33298 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813001157-676638-m02/id_rsa Username:docker}
	I0813 00:13:37.125393  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0813 00:13:37.125451  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 00:13:37.142375  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0813 00:13:37.142441  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0813 00:13:37.159479  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0813 00:13:37.159537  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 00:13:37.176103  743232 provision.go:86] duration metric: configureAuth took 522.576084ms
	I0813 00:13:37.176129  743232 ubuntu.go:193] setting minikube options for container-runtime
	I0813 00:13:37.176397  743232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813001157-676638-m02
	I0813 00:13:37.219983  743232 main.go:130] libmachine: Using SSH client type: native
	I0813 00:13:37.220172  743232 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33298 <nil> <nil>}
	I0813 00:13:37.220188  743232 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 00:13:37.595519  743232 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 00:13:37.595553  743232 machine.go:91] provisioned docker machine in 1.270536318s
	I0813 00:13:37.595565  743232 client.go:171] LocalClient.Create took 5.980951194s
	I0813 00:13:37.595583  743232 start.go:168] duration metric: libmachine.API.Create for "multinode-20210813001157-676638" took 5.981007705s
	I0813 00:13:37.595593  743232 start.go:267] post-start starting for "multinode-20210813001157-676638-m02" (driver="docker")
	I0813 00:13:37.595600  743232 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 00:13:37.595668  743232 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 00:13:37.595710  743232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813001157-676638-m02
	I0813 00:13:37.637401  743232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33298 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813001157-676638-m02/id_rsa Username:docker}
	I0813 00:13:37.725552  743232 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 00:13:37.728805  743232 command_runner.go:124] > NAME="Ubuntu"
	I0813 00:13:37.728831  743232 command_runner.go:124] > VERSION="20.04.2 LTS (Focal Fossa)"
	I0813 00:13:37.728836  743232 command_runner.go:124] > ID=ubuntu
	I0813 00:13:37.728842  743232 command_runner.go:124] > ID_LIKE=debian
	I0813 00:13:37.728848  743232 command_runner.go:124] > PRETTY_NAME="Ubuntu 20.04.2 LTS"
	I0813 00:13:37.728855  743232 command_runner.go:124] > VERSION_ID="20.04"
	I0813 00:13:37.728877  743232 command_runner.go:124] > HOME_URL="https://www.ubuntu.com/"
	I0813 00:13:37.728888  743232 command_runner.go:124] > SUPPORT_URL="https://help.ubuntu.com/"
	I0813 00:13:37.728899  743232 command_runner.go:124] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0813 00:13:37.728917  743232 command_runner.go:124] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0813 00:13:37.728927  743232 command_runner.go:124] > VERSION_CODENAME=focal
	I0813 00:13:37.728935  743232 command_runner.go:124] > UBUNTU_CODENAME=focal
	I0813 00:13:37.729062  743232 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 00:13:37.729082  743232 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 00:13:37.729094  743232 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 00:13:37.729104  743232 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 00:13:37.729120  743232 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0813 00:13:37.729187  743232 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0813 00:13:37.729365  743232 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem -> 6766382.pem in /etc/ssl/certs
	I0813 00:13:37.729379  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem -> /etc/ssl/certs/6766382.pem
	I0813 00:13:37.729506  743232 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 00:13:37.736600  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem --> /etc/ssl/certs/6766382.pem (1708 bytes)
	I0813 00:13:37.753920  743232 start.go:270] post-start completed in 158.310642ms
	I0813 00:13:37.754327  743232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210813001157-676638-m02
	I0813 00:13:37.795185  743232 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/config.json ...
	I0813 00:13:37.795430  743232 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 00:13:37.795475  743232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813001157-676638-m02
	I0813 00:13:37.837396  743232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33298 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813001157-676638-m02/id_rsa Username:docker}
	I0813 00:13:37.922249  743232 command_runner.go:124] > 30%!
	(MISSING)I0813 00:13:37.922291  743232 start.go:129] duration metric: createHost completed in 6.310883374s
	I0813 00:13:37.922303  743232 start.go:80] releasing machines lock for "multinode-20210813001157-676638-m02", held for 6.310997959s
	I0813 00:13:37.922387  743232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210813001157-676638-m02
	I0813 00:13:37.967660  743232 out.go:177] * Found network options:
	I0813 00:13:37.969753  743232 out.go:177]   - NO_PROXY=192.168.49.2
	W0813 00:13:37.969813  743232 proxy.go:118] fail to check proxy env: Error ip not in block
	W0813 00:13:37.969848  743232 proxy.go:118] fail to check proxy env: Error ip not in block
	I0813 00:13:37.969963  743232 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 00:13:37.970004  743232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813001157-676638-m02
	I0813 00:13:37.970013  743232 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 00:13:37.970088  743232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813001157-676638-m02
	I0813 00:13:38.013574  743232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33298 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813001157-676638-m02/id_rsa Username:docker}
	I0813 00:13:38.014647  743232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33298 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813001157-676638-m02/id_rsa Username:docker}
	I0813 00:13:38.131656  743232 command_runner.go:124] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0813 00:13:38.131683  743232 command_runner.go:124] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0813 00:13:38.131690  743232 command_runner.go:124] > <H1>302 Moved</H1>
	I0813 00:13:38.131695  743232 command_runner.go:124] > The document has moved
	I0813 00:13:38.131701  743232 command_runner.go:124] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0813 00:13:38.131705  743232 command_runner.go:124] > </BODY></HTML>
	I0813 00:13:38.131803  743232 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 00:13:38.143062  743232 docker.go:153] disabling docker service ...
	I0813 00:13:38.143121  743232 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 00:13:38.154065  743232 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 00:13:38.164629  743232 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 00:13:38.230327  743232 command_runner.go:124] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0813 00:13:38.230422  743232 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 00:13:38.299046  743232 command_runner.go:124] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0813 00:13:38.299133  743232 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 00:13:38.308438  743232 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 00:13:38.320169  743232 command_runner.go:124] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0813 00:13:38.320195  743232 command_runner.go:124] > image-endpoint: unix:///var/run/crio/crio.sock
	I0813 00:13:38.320798  743232 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 00:13:38.328478  743232 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 00:13:38.328507  743232 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0813 00:13:38.336650  743232 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 00:13:38.342859  743232 command_runner.go:124] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 00:13:38.342899  743232 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 00:13:38.342936  743232 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 00:13:38.349906  743232 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 00:13:38.355960  743232 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 00:13:38.416439  743232 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 00:13:38.426895  743232 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 00:13:38.426975  743232 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:13:38.430467  743232 command_runner.go:124] >   File: /var/run/crio/crio.sock
	I0813 00:13:38.430501  743232 command_runner.go:124] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0813 00:13:38.430512  743232 command_runner.go:124] > Device: 100005h/1048581d	Inode: 3523436     Links: 1
	I0813 00:13:38.430523  743232 command_runner.go:124] > Access: (0755/srwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 00:13:38.430533  743232 command_runner.go:124] > Access: 2021-08-13 00:13:37.581902069 +0000
	I0813 00:13:38.430542  743232 command_runner.go:124] > Modify: 2021-08-13 00:13:37.581902069 +0000
	I0813 00:13:38.430550  743232 command_runner.go:124] > Change: 2021-08-13 00:13:37.581902069 +0000
	I0813 00:13:38.430562  743232 command_runner.go:124] >  Birth: -
	I0813 00:13:38.430591  743232 start.go:417] Will wait 60s for crictl version
	I0813 00:13:38.430651  743232 ssh_runner.go:149] Run: sudo crictl version
	I0813 00:13:38.459970  743232 command_runner.go:124] > Version:  0.1.0
	I0813 00:13:38.459999  743232 command_runner.go:124] > RuntimeName:  cri-o
	I0813 00:13:38.460006  743232 command_runner.go:124] > RuntimeVersion:  1.20.3
	I0813 00:13:38.460015  743232 command_runner.go:124] > RuntimeApiVersion:  v1alpha1
	I0813 00:13:38.461713  743232 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 00:13:38.461792  743232 ssh_runner.go:149] Run: crio --version
	I0813 00:13:38.524900  743232 command_runner.go:124] ! time="2021-08-13T00:13:38Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0813 00:13:38.526493  743232 command_runner.go:124] > crio version 1.20.3
	I0813 00:13:38.526512  743232 command_runner.go:124] > Version:       1.20.3
	I0813 00:13:38.526519  743232 command_runner.go:124] > GitCommit:     50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d
	I0813 00:13:38.526523  743232 command_runner.go:124] > GitTreeState:  clean
	I0813 00:13:38.526530  743232 command_runner.go:124] > BuildDate:     2021-06-03T20:25:45Z
	I0813 00:13:38.526534  743232 command_runner.go:124] > GoVersion:     go1.15.2
	I0813 00:13:38.526539  743232 command_runner.go:124] > Compiler:      gc
	I0813 00:13:38.526543  743232 command_runner.go:124] > Platform:      linux/amd64
	I0813 00:13:38.526548  743232 command_runner.go:124] > Linkmode:      dynamic
	I0813 00:13:38.526621  743232 ssh_runner.go:149] Run: crio --version
	I0813 00:13:38.591664  743232 command_runner.go:124] > crio version 1.20.3
	I0813 00:13:38.591698  743232 command_runner.go:124] > Version:       1.20.3
	I0813 00:13:38.591709  743232 command_runner.go:124] > GitCommit:     50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d
	I0813 00:13:38.591715  743232 command_runner.go:124] > GitTreeState:  clean
	I0813 00:13:38.591723  743232 command_runner.go:124] > BuildDate:     2021-06-03T20:25:45Z
	I0813 00:13:38.591730  743232 command_runner.go:124] > GoVersion:     go1.15.2
	I0813 00:13:38.591735  743232 command_runner.go:124] > Compiler:      gc
	I0813 00:13:38.591743  743232 command_runner.go:124] > Platform:      linux/amd64
	I0813 00:13:38.591750  743232 command_runner.go:124] > Linkmode:      dynamic
	I0813 00:13:38.592939  743232 command_runner.go:124] ! time="2021-08-13T00:13:38Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0813 00:13:38.595871  743232 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0813 00:13:38.597529  743232 out.go:177]   - env NO_PROXY=192.168.49.2
	I0813 00:13:38.597641  743232 cli_runner.go:115] Run: docker network inspect multinode-20210813001157-676638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 00:13:38.637970  743232 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0813 00:13:38.641790  743232 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 00:13:38.651591  743232 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638 for IP: 192.168.49.3
	I0813 00:13:38.651645  743232 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0813 00:13:38.651660  743232 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0813 00:13:38.651673  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0813 00:13:38.651688  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0813 00:13:38.651700  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0813 00:13:38.651711  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0813 00:13:38.651767  743232 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/676638.pem (1338 bytes)
	W0813 00:13:38.651803  743232 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/676638_empty.pem, impossibly tiny 0 bytes
	I0813 00:13:38.651816  743232 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 00:13:38.651839  743232 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1082 bytes)
	I0813 00:13:38.651872  743232 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0813 00:13:38.651899  743232 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1675 bytes)
	I0813 00:13:38.651946  743232 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem (1708 bytes)
	I0813 00:13:38.651972  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem -> /usr/share/ca-certificates/6766382.pem
	I0813 00:13:38.651985  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:13:38.651997  743232 vm_assets.go:99] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/676638.pem -> /usr/share/ca-certificates/676638.pem
	I0813 00:13:38.652366  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 00:13:38.670670  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 00:13:38.688452  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 00:13:38.706216  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 00:13:38.724088  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem --> /usr/share/ca-certificates/6766382.pem (1708 bytes)
	I0813 00:13:38.742657  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 00:13:38.761036  743232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/676638.pem --> /usr/share/ca-certificates/676638.pem (1338 bytes)
	I0813 00:13:38.780438  743232 ssh_runner.go:149] Run: openssl version
	I0813 00:13:38.785276  743232 command_runner.go:124] > OpenSSL 1.1.1f  31 Mar 2020
	I0813 00:13:38.785438  743232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6766382.pem && ln -fs /usr/share/ca-certificates/6766382.pem /etc/ssl/certs/6766382.pem"
	I0813 00:13:38.793997  743232 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6766382.pem
	I0813 00:13:38.797830  743232 command_runner.go:124] > -rw-r--r-- 1 root root 1708 Aug 13 00:05 /usr/share/ca-certificates/6766382.pem
	I0813 00:13:38.797886  743232 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 13 00:05 /usr/share/ca-certificates/6766382.pem
	I0813 00:13:38.797925  743232 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6766382.pem
	I0813 00:13:38.803064  743232 command_runner.go:124] > 3ec20f2e
	I0813 00:13:38.803365  743232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6766382.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 00:13:38.811848  743232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 00:13:38.819685  743232 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:13:38.823067  743232 command_runner.go:124] > -rw-r--r-- 1 root root 1111 Aug 12 23:55 /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:13:38.823118  743232 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 12 23:55 /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:13:38.823165  743232 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:13:38.828198  743232 command_runner.go:124] > b5213941
	I0813 00:13:38.828308  743232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 00:13:38.835958  743232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/676638.pem && ln -fs /usr/share/ca-certificates/676638.pem /etc/ssl/certs/676638.pem"
	I0813 00:13:38.843729  743232 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/676638.pem
	I0813 00:13:38.846837  743232 command_runner.go:124] > -rw-r--r-- 1 root root 1338 Aug 13 00:05 /usr/share/ca-certificates/676638.pem
	I0813 00:13:38.846888  743232 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 13 00:05 /usr/share/ca-certificates/676638.pem
	I0813 00:13:38.846921  743232 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/676638.pem
	I0813 00:13:38.851552  743232 command_runner.go:124] > 51391683
	I0813 00:13:38.851701  743232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/676638.pem /etc/ssl/certs/51391683.0"
	I0813 00:13:38.859009  743232 ssh_runner.go:149] Run: crio config
	I0813 00:13:38.925012  743232 command_runner.go:124] ! time="2021-08-13T00:13:38Z" level=info msg="Starting CRI-O, version: 1.20.3, git: 50065140109e8dc4b8fd6dc5d2b587e5cb7ed79d(clean)"
	I0813 00:13:38.928263  743232 command_runner.go:124] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0813 00:13:38.930868  743232 command_runner.go:124] > # The CRI-O configuration file specifies all of the available configuration
	I0813 00:13:38.930899  743232 command_runner.go:124] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0813 00:13:38.930911  743232 command_runner.go:124] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0813 00:13:38.930919  743232 command_runner.go:124] > #
	I0813 00:13:38.930931  743232 command_runner.go:124] > # Please refer to crio.conf(5) for details of all configuration options.
	I0813 00:13:38.930943  743232 command_runner.go:124] > # CRI-O supports partial configuration reload during runtime, which can be
	I0813 00:13:38.930952  743232 command_runner.go:124] > # done by sending SIGHUP to the running process. Currently supported options
	I0813 00:13:38.930962  743232 command_runner.go:124] > # are explicitly mentioned with: 'This option supports live configuration
	I0813 00:13:38.930970  743232 command_runner.go:124] > # reload'.
	I0813 00:13:38.930983  743232 command_runner.go:124] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0813 00:13:38.930992  743232 command_runner.go:124] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0813 00:13:38.931002  743232 command_runner.go:124] > # you want to change the system's defaults. If you want to modify storage just
	I0813 00:13:38.931010  743232 command_runner.go:124] > # for CRI-O, you can change the storage configuration options here.
	I0813 00:13:38.931016  743232 command_runner.go:124] > [crio]
	I0813 00:13:38.931023  743232 command_runner.go:124] > # Path to the "root directory". CRI-O stores all of its data, including
	I0813 00:13:38.931031  743232 command_runner.go:124] > # containers images, in this directory.
	I0813 00:13:38.931038  743232 command_runner.go:124] > #root = "/var/lib/containers/storage"
	I0813 00:13:38.931049  743232 command_runner.go:124] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0813 00:13:38.931056  743232 command_runner.go:124] > #runroot = "/run/containers/storage"
	I0813 00:13:38.931063  743232 command_runner.go:124] > # Storage driver used to manage the storage of images and containers. Please
	I0813 00:13:38.931072  743232 command_runner.go:124] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0813 00:13:38.931080  743232 command_runner.go:124] > #storage_driver = "overlay"
	I0813 00:13:38.931086  743232 command_runner.go:124] > # List to pass options to the storage driver. Please refer to
	I0813 00:13:38.931094  743232 command_runner.go:124] > # containers-storage.conf(5) to see all available storage options.
	I0813 00:13:38.931101  743232 command_runner.go:124] > #storage_option = [
	I0813 00:13:38.931106  743232 command_runner.go:124] > #	"overlay.mountopt=nodev",
	I0813 00:13:38.931109  743232 command_runner.go:124] > #]
	I0813 00:13:38.931116  743232 command_runner.go:124] > # The default log directory where all logs will go unless directly specified by
	I0813 00:13:38.931124  743232 command_runner.go:124] > # the kubelet. The log directory specified must be an absolute directory.
	I0813 00:13:38.931131  743232 command_runner.go:124] > log_dir = "/var/log/crio/pods"
	I0813 00:13:38.931137  743232 command_runner.go:124] > # Location for CRI-O to lay down the temporary version file.
	I0813 00:13:38.931147  743232 command_runner.go:124] > # It is used to check if crio wipe should wipe containers, which should
	I0813 00:13:38.931155  743232 command_runner.go:124] > # always happen on a node reboot
	I0813 00:13:38.931163  743232 command_runner.go:124] > version_file = "/var/run/crio/version"
	I0813 00:13:38.931172  743232 command_runner.go:124] > # Location for CRI-O to lay down the persistent version file.
	I0813 00:13:38.931181  743232 command_runner.go:124] > # It is used to check if crio wipe should wipe images, which should
	I0813 00:13:38.931188  743232 command_runner.go:124] > # only happen when CRI-O has been upgraded
	I0813 00:13:38.931203  743232 command_runner.go:124] > version_file_persist = "/var/lib/crio/version"
	I0813 00:13:38.931215  743232 command_runner.go:124] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0813 00:13:38.931219  743232 command_runner.go:124] > [crio.api]
	I0813 00:13:38.931225  743232 command_runner.go:124] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0813 00:13:38.931232  743232 command_runner.go:124] > listen = "/var/run/crio/crio.sock"
	I0813 00:13:38.931237  743232 command_runner.go:124] > # IP address on which the stream server will listen.
	I0813 00:13:38.931244  743232 command_runner.go:124] > stream_address = "127.0.0.1"
	I0813 00:13:38.931251  743232 command_runner.go:124] > # The port on which the stream server will listen. If the port is set to "0", then
	I0813 00:13:38.931259  743232 command_runner.go:124] > # CRI-O will allocate a random free port number.
	I0813 00:13:38.931262  743232 command_runner.go:124] > stream_port = "0"
	I0813 00:13:38.931268  743232 command_runner.go:124] > # Enable encrypted TLS transport of the stream server.
	I0813 00:13:38.931274  743232 command_runner.go:124] > stream_enable_tls = false
	I0813 00:13:38.931280  743232 command_runner.go:124] > # Length of time until open streams terminate due to lack of activity
	I0813 00:13:38.931287  743232 command_runner.go:124] > stream_idle_timeout = ""
	I0813 00:13:38.931294  743232 command_runner.go:124] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0813 00:13:38.931307  743232 command_runner.go:124] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0813 00:13:38.931316  743232 command_runner.go:124] > # minutes.
	I0813 00:13:38.931326  743232 command_runner.go:124] > stream_tls_cert = ""
	I0813 00:13:38.931336  743232 command_runner.go:124] > # Path to the key file used to serve the encrypted stream. This file can
	I0813 00:13:38.931344  743232 command_runner.go:124] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0813 00:13:38.931349  743232 command_runner.go:124] > stream_tls_key = ""
	I0813 00:13:38.931359  743232 command_runner.go:124] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0813 00:13:38.931370  743232 command_runner.go:124] > # communication with the encrypted stream. This file can change and CRI-O will
	I0813 00:13:38.931378  743232 command_runner.go:124] > # automatically pick up the changes within 5 minutes.
	I0813 00:13:38.931385  743232 command_runner.go:124] > stream_tls_ca = ""
	I0813 00:13:38.931394  743232 command_runner.go:124] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 00:13:38.931400  743232 command_runner.go:124] > grpc_max_send_msg_size = 16777216
	I0813 00:13:38.931408  743232 command_runner.go:124] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0813 00:13:38.931415  743232 command_runner.go:124] > grpc_max_recv_msg_size = 16777216
	I0813 00:13:38.931422  743232 command_runner.go:124] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0813 00:13:38.931430  743232 command_runner.go:124] > # and options for how to set up and manage the OCI runtime.
	I0813 00:13:38.931434  743232 command_runner.go:124] > [crio.runtime]
	I0813 00:13:38.931442  743232 command_runner.go:124] > # A list of ulimits to be set in containers by default, specified as
	I0813 00:13:38.931450  743232 command_runner.go:124] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0813 00:13:38.931457  743232 command_runner.go:124] > # "nofile=1024:2048"
	I0813 00:13:38.931463  743232 command_runner.go:124] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0813 00:13:38.931469  743232 command_runner.go:124] > #default_ulimits = [
	I0813 00:13:38.931473  743232 command_runner.go:124] > #]
	I0813 00:13:38.931482  743232 command_runner.go:124] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0813 00:13:38.931487  743232 command_runner.go:124] > no_pivot = false
	I0813 00:13:38.931498  743232 command_runner.go:124] > # decryption_keys_path is the path where the keys required for
	I0813 00:13:38.931512  743232 command_runner.go:124] > # image decryption are stored. This option supports live configuration reload.
	I0813 00:13:38.931519  743232 command_runner.go:124] > decryption_keys_path = "/etc/crio/keys/"
	I0813 00:13:38.931525  743232 command_runner.go:124] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0813 00:13:38.931532  743232 command_runner.go:124] > # Will be searched for using $PATH if empty.
	I0813 00:13:38.931536  743232 command_runner.go:124] > conmon = ""
	I0813 00:13:38.931540  743232 command_runner.go:124] > # Cgroup setting for conmon
	I0813 00:13:38.931547  743232 command_runner.go:124] > conmon_cgroup = "system.slice"
	I0813 00:13:38.931554  743232 command_runner.go:124] > # Environment variable list for the conmon process, used for passing necessary
	I0813 00:13:38.931564  743232 command_runner.go:124] > # environment variables to conmon or the runtime.
	I0813 00:13:38.931571  743232 command_runner.go:124] > conmon_env = [
	I0813 00:13:38.931577  743232 command_runner.go:124] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0813 00:13:38.931582  743232 command_runner.go:124] > ]
	I0813 00:13:38.931588  743232 command_runner.go:124] > # Additional environment variables to set for all the
	I0813 00:13:38.931596  743232 command_runner.go:124] > # containers. These are overridden if set in the
	I0813 00:13:38.931603  743232 command_runner.go:124] > # container image spec or in the container runtime configuration.
	I0813 00:13:38.931609  743232 command_runner.go:124] > default_env = [
	I0813 00:13:38.931612  743232 command_runner.go:124] > ]
	I0813 00:13:38.931617  743232 command_runner.go:124] > # If true, SELinux will be used for pod separation on the host.
	I0813 00:13:38.931623  743232 command_runner.go:124] > selinux = false
	I0813 00:13:38.931631  743232 command_runner.go:124] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0813 00:13:38.931639  743232 command_runner.go:124] > # for the runtime. If not specified, then the internal default seccomp profile
	I0813 00:13:38.931648  743232 command_runner.go:124] > # will be used. This option supports live configuration reload.
	I0813 00:13:38.931655  743232 command_runner.go:124] > seccomp_profile = ""
	I0813 00:13:38.931661  743232 command_runner.go:124] > # Changes the meaning of an empty seccomp profile. By default
	I0813 00:13:38.931671  743232 command_runner.go:124] > # (and according to CRI spec), an empty profile means unconfined.
	I0813 00:13:38.931680  743232 command_runner.go:124] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0813 00:13:38.931687  743232 command_runner.go:124] > # which might increase security.
	I0813 00:13:38.931691  743232 command_runner.go:124] > seccomp_use_default_when_empty = false
	I0813 00:13:38.931700  743232 command_runner.go:124] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0813 00:13:38.931709  743232 command_runner.go:124] > # profile name is "crio-default". This profile only takes effect if the user
	I0813 00:13:38.931716  743232 command_runner.go:124] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0813 00:13:38.931725  743232 command_runner.go:124] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0813 00:13:38.931732  743232 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:13:38.931737  743232 command_runner.go:124] > apparmor_profile = "crio-default"
	I0813 00:13:38.931746  743232 command_runner.go:124] > # Used to change irqbalance service config file path which is used for configuring
	I0813 00:13:38.931754  743232 command_runner.go:124] > # irqbalance daemon.
	I0813 00:13:38.931770  743232 command_runner.go:124] > irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0813 00:13:38.931778  743232 command_runner.go:124] > # Cgroup management implementation used for the runtime.
	I0813 00:13:38.931782  743232 command_runner.go:124] > cgroup_manager = "systemd"
	I0813 00:13:38.931788  743232 command_runner.go:124] > # Specify whether the image pull must be performed in a separate cgroup.
	I0813 00:13:38.931795  743232 command_runner.go:124] > separate_pull_cgroup = ""
	I0813 00:13:38.931802  743232 command_runner.go:124] > # List of default capabilities for containers. If it is empty or commented out,
	I0813 00:13:38.931810  743232 command_runner.go:124] > # only the capabilities defined in the containers json file by the user/kube
	I0813 00:13:38.931816  743232 command_runner.go:124] > # will be added.
	I0813 00:13:38.931820  743232 command_runner.go:124] > default_capabilities = [
	I0813 00:13:38.931826  743232 command_runner.go:124] > 	"CHOWN",
	I0813 00:13:38.931830  743232 command_runner.go:124] > 	"DAC_OVERRIDE",
	I0813 00:13:38.931837  743232 command_runner.go:124] > 	"FSETID",
	I0813 00:13:38.931841  743232 command_runner.go:124] > 	"FOWNER",
	I0813 00:13:38.931847  743232 command_runner.go:124] > 	"SETGID",
	I0813 00:13:38.931850  743232 command_runner.go:124] > 	"SETUID",
	I0813 00:13:38.931856  743232 command_runner.go:124] > 	"SETPCAP",
	I0813 00:13:38.931860  743232 command_runner.go:124] > 	"NET_BIND_SERVICE",
	I0813 00:13:38.931866  743232 command_runner.go:124] > 	"KILL",
	I0813 00:13:38.931869  743232 command_runner.go:124] > ]
	I0813 00:13:38.931878  743232 command_runner.go:124] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0813 00:13:38.931887  743232 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 00:13:38.931891  743232 command_runner.go:124] > default_sysctls = [
	I0813 00:13:38.931896  743232 command_runner.go:124] > ]
	I0813 00:13:38.931902  743232 command_runner.go:124] > # List of additional devices, specified as
	I0813 00:13:38.931911  743232 command_runner.go:124] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0813 00:13:38.931921  743232 command_runner.go:124] > # If it is empty or commented out, only the devices
	I0813 00:13:38.931930  743232 command_runner.go:124] > # defined in the container json file by the user/kube will be added.
	I0813 00:13:38.931937  743232 command_runner.go:124] > additional_devices = [
	I0813 00:13:38.931940  743232 command_runner.go:124] > ]
	I0813 00:13:38.931947  743232 command_runner.go:124] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0813 00:13:38.931955  743232 command_runner.go:124] > # directories does not exist, then CRI-O will automatically skip them.
	I0813 00:13:38.931963  743232 command_runner.go:124] > hooks_dir = [
	I0813 00:13:38.931968  743232 command_runner.go:124] > 	"/usr/share/containers/oci/hooks.d",
	I0813 00:13:38.931973  743232 command_runner.go:124] > ]
	I0813 00:13:38.931980  743232 command_runner.go:124] > # Path to the file specifying the defaults mounts for each container. The
	I0813 00:13:38.931986  743232 command_runner.go:124] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0813 00:13:38.931994  743232 command_runner.go:124] > # its default mounts from the following two files:
	I0813 00:13:38.931998  743232 command_runner.go:124] > #
	I0813 00:13:38.932004  743232 command_runner.go:124] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0813 00:13:38.932013  743232 command_runner.go:124] > #      override file, where users can either add in their own default mounts, or
	I0813 00:13:38.932021  743232 command_runner.go:124] > #      override the default mounts shipped with the package.
	I0813 00:13:38.932026  743232 command_runner.go:124] > #
	I0813 00:13:38.932033  743232 command_runner.go:124] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0813 00:13:38.932053  743232 command_runner.go:124] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0813 00:13:38.932064  743232 command_runner.go:124] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0813 00:13:38.932072  743232 command_runner.go:124] > #      only add mounts it finds in this file.
	I0813 00:13:38.932075  743232 command_runner.go:124] > #
	I0813 00:13:38.932079  743232 command_runner.go:124] > #default_mounts_file = ""
	I0813 00:13:38.932085  743232 command_runner.go:124] > # Maximum number of processes allowed in a container.
	I0813 00:13:38.932093  743232 command_runner.go:124] > pids_limit = 1024
	I0813 00:13:38.932103  743232 command_runner.go:124] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0813 00:13:38.932112  743232 command_runner.go:124] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0813 00:13:38.932121  743232 command_runner.go:124] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0813 00:13:38.932129  743232 command_runner.go:124] > # limit is never exceeded.
	I0813 00:13:38.932135  743232 command_runner.go:124] > log_size_max = -1
	I0813 00:13:38.932159  743232 command_runner.go:124] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0813 00:13:38.932170  743232 command_runner.go:124] > log_to_journald = false
	I0813 00:13:38.932176  743232 command_runner.go:124] > # Path to directory in which container exit files are written to by conmon.
	I0813 00:13:38.932181  743232 command_runner.go:124] > container_exits_dir = "/var/run/crio/exits"
	I0813 00:13:38.932189  743232 command_runner.go:124] > # Path to directory for container attach sockets.
	I0813 00:13:38.932194  743232 command_runner.go:124] > container_attach_socket_dir = "/var/run/crio"
	I0813 00:13:38.932202  743232 command_runner.go:124] > # The prefix to use for the source of the bind mounts.
	I0813 00:13:38.932206  743232 command_runner.go:124] > bind_mount_prefix = ""
	I0813 00:13:38.932215  743232 command_runner.go:124] > # If set to true, all containers will run in read-only mode.
	I0813 00:13:38.932221  743232 command_runner.go:124] > read_only = false
	I0813 00:13:38.932227  743232 command_runner.go:124] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0813 00:13:38.932236  743232 command_runner.go:124] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0813 00:13:38.932243  743232 command_runner.go:124] > # live configuration reload.
	I0813 00:13:38.932246  743232 command_runner.go:124] > log_level = "info"
	I0813 00:13:38.932255  743232 command_runner.go:124] > # Filter the log messages by the provided regular expression.
	I0813 00:13:38.932264  743232 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:13:38.932268  743232 command_runner.go:124] > log_filter = ""
	I0813 00:13:38.932274  743232 command_runner.go:124] > # The UID mappings for the user namespace of each container. A range is
	I0813 00:13:38.932283  743232 command_runner.go:124] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0813 00:13:38.932290  743232 command_runner.go:124] > # separated by comma.
	I0813 00:13:38.932294  743232 command_runner.go:124] > uid_mappings = ""
	I0813 00:13:38.932305  743232 command_runner.go:124] > # The GID mappings for the user namespace of each container. A range is
	I0813 00:13:38.932315  743232 command_runner.go:124] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0813 00:13:38.932321  743232 command_runner.go:124] > # separated by comma.
	I0813 00:13:38.932326  743232 command_runner.go:124] > gid_mappings = ""
	I0813 00:13:38.932336  743232 command_runner.go:124] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0813 00:13:38.932344  743232 command_runner.go:124] > # regarding the proper termination of the container. The lowest possible
	I0813 00:13:38.932352  743232 command_runner.go:124] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0813 00:13:38.932359  743232 command_runner.go:124] > ctr_stop_timeout = 30
	I0813 00:13:38.932365  743232 command_runner.go:124] > # manage_ns_lifecycle determines whether we pin and remove namespaces
	I0813 00:13:38.932371  743232 command_runner.go:124] > # and manage their lifecycle.
	I0813 00:13:38.932378  743232 command_runner.go:124] > # This option is being deprecated, and will be unconditionally true in the future.
	I0813 00:13:38.932385  743232 command_runner.go:124] > manage_ns_lifecycle = true
	I0813 00:13:38.932392  743232 command_runner.go:124] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0813 00:13:38.932400  743232 command_runner.go:124] > # when a pod does not have a private PID namespace, and does not use
	I0813 00:13:38.932408  743232 command_runner.go:124] > # a kernel separating runtime (like kata).
	I0813 00:13:38.932413  743232 command_runner.go:124] > # It requires manage_ns_lifecycle to be true.
	I0813 00:13:38.932419  743232 command_runner.go:124] > drop_infra_ctr = false
	I0813 00:13:38.932425  743232 command_runner.go:124] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0813 00:13:38.932433  743232 command_runner.go:124] > # You can use linux CPU list format to specify desired CPUs.
	I0813 00:13:38.932443  743232 command_runner.go:124] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0813 00:13:38.932447  743232 command_runner.go:124] > # infra_ctr_cpuset = ""
	I0813 00:13:38.932456  743232 command_runner.go:124] > # The directory where the state of the managed namespaces gets tracked.
	I0813 00:13:38.932464  743232 command_runner.go:124] > # Only used when manage_ns_lifecycle is true.
	I0813 00:13:38.932468  743232 command_runner.go:124] > namespaces_dir = "/var/run"
	I0813 00:13:38.932480  743232 command_runner.go:124] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0813 00:13:38.932487  743232 command_runner.go:124] > pinns_path = ""
	I0813 00:13:38.932494  743232 command_runner.go:124] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0813 00:13:38.932503  743232 command_runner.go:124] > # The name is matched against the runtimes map below. If this value is changed,
	I0813 00:13:38.932512  743232 command_runner.go:124] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0813 00:13:38.932519  743232 command_runner.go:124] > default_runtime = "runc"
	I0813 00:13:38.932525  743232 command_runner.go:124] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0813 00:13:38.932535  743232 command_runner.go:124] > # The runtime to use is picked based on the runtime_handler provided by the CRI.
	I0813 00:13:38.932541  743232 command_runner.go:124] > # If no runtime_handler is provided, the runtime will be picked based on the level
	I0813 00:13:38.932553  743232 command_runner.go:124] > # of trust of the workload. Each entry in the table should follow the format:
	I0813 00:13:38.932559  743232 command_runner.go:124] > #
	I0813 00:13:38.932564  743232 command_runner.go:124] > #[crio.runtime.runtimes.runtime-handler]
	I0813 00:13:38.932571  743232 command_runner.go:124] > #  runtime_path = "/path/to/the/executable"
	I0813 00:13:38.932578  743232 command_runner.go:124] > #  runtime_type = "oci"
	I0813 00:13:38.932585  743232 command_runner.go:124] > #  runtime_root = "/path/to/the/root"
	I0813 00:13:38.932589  743232 command_runner.go:124] > #  privileged_without_host_devices = false
	I0813 00:13:38.932596  743232 command_runner.go:124] > #  allowed_annotations = []
	I0813 00:13:38.932599  743232 command_runner.go:124] > # Where:
	I0813 00:13:38.932605  743232 command_runner.go:124] > # - runtime-handler: name used to identify the runtime
	I0813 00:13:38.932613  743232 command_runner.go:124] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0813 00:13:38.932622  743232 command_runner.go:124] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0813 00:13:38.932629  743232 command_runner.go:124] > #   the runtime executable name, and the runtime executable should be placed
	I0813 00:13:38.932635  743232 command_runner.go:124] > #   in $PATH.
	I0813 00:13:38.932642  743232 command_runner.go:124] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0813 00:13:38.932651  743232 command_runner.go:124] > #   omitted, an "oci" runtime is assumed.
	I0813 00:13:38.932664  743232 command_runner.go:124] > # - runtime_root (optional, string): root directory for storage of containers
	I0813 00:13:38.932674  743232 command_runner.go:124] > #   state.
	I0813 00:13:38.932684  743232 command_runner.go:124] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0813 00:13:38.932693  743232 command_runner.go:124] > #   host devices from being passed to privileged containers.
	I0813 00:13:38.932702  743232 command_runner.go:124] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0813 00:13:38.932715  743232 command_runner.go:124] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0813 00:13:38.932723  743232 command_runner.go:124] > #   The currently recognized values are:
	I0813 00:13:38.932729  743232 command_runner.go:124] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0813 00:13:38.932738  743232 command_runner.go:124] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0813 00:13:38.932746  743232 command_runner.go:124] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0813 00:13:38.932753  743232 command_runner.go:124] > [crio.runtime.runtimes.runc]
	I0813 00:13:38.932758  743232 command_runner.go:124] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0813 00:13:38.932768  743232 command_runner.go:124] > runtime_type = "oci"
	I0813 00:13:38.932775  743232 command_runner.go:124] > runtime_root = "/run/runc"
	I0813 00:13:38.932782  743232 command_runner.go:124] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0813 00:13:38.932788  743232 command_runner.go:124] > # running containers
	I0813 00:13:38.932793  743232 command_runner.go:124] > #[crio.runtime.runtimes.crun]
	I0813 00:13:38.932803  743232 command_runner.go:124] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0813 00:13:38.932812  743232 command_runner.go:124] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0813 00:13:38.932821  743232 command_runner.go:124] > # surface and mitigating the consequences of containers breakout.
	I0813 00:13:38.932830  743232 command_runner.go:124] > # Kata Containers with the default configured VMM
	I0813 00:13:38.932838  743232 command_runner.go:124] > #[crio.runtime.runtimes.kata-runtime]
	I0813 00:13:38.932843  743232 command_runner.go:124] > # Kata Containers with the QEMU VMM
	I0813 00:13:38.932852  743232 command_runner.go:124] > #[crio.runtime.runtimes.kata-qemu]
	I0813 00:13:38.932860  743232 command_runner.go:124] > # Kata Containers with the Firecracker VMM
	I0813 00:13:38.932864  743232 command_runner.go:124] > #[crio.runtime.runtimes.kata-fc]
	I0813 00:13:38.932873  743232 command_runner.go:124] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0813 00:13:38.932879  743232 command_runner.go:124] > #
	I0813 00:13:38.932885  743232 command_runner.go:124] > # CRI-O reads its configured registries defaults from the system wide
	I0813 00:13:38.932893  743232 command_runner.go:124] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0813 00:13:38.932902  743232 command_runner.go:124] > # you want to modify just CRI-O, you can change the registries configuration in
	I0813 00:13:38.932911  743232 command_runner.go:124] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0813 00:13:38.932919  743232 command_runner.go:124] > # use the system's defaults from /etc/containers/registries.conf.
	I0813 00:13:38.932925  743232 command_runner.go:124] > [crio.image]
	I0813 00:13:38.932932  743232 command_runner.go:124] > # Default transport for pulling images from a remote container storage.
	I0813 00:13:38.932938  743232 command_runner.go:124] > default_transport = "docker://"
	I0813 00:13:38.932945  743232 command_runner.go:124] > # The path to a file containing credentials necessary for pulling images from
	I0813 00:13:38.932953  743232 command_runner.go:124] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0813 00:13:38.932957  743232 command_runner.go:124] > global_auth_file = ""
	I0813 00:13:38.932963  743232 command_runner.go:124] > # The image used to instantiate infra containers.
	I0813 00:13:38.932971  743232 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:13:38.932975  743232 command_runner.go:124] > pause_image = "k8s.gcr.io/pause:3.4.1"
	I0813 00:13:38.932985  743232 command_runner.go:124] > # The path to a file containing credentials specific for pulling the pause_image from
	I0813 00:13:38.932993  743232 command_runner.go:124] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0813 00:13:38.933003  743232 command_runner.go:124] > # This option supports live configuration reload.
	I0813 00:13:38.933011  743232 command_runner.go:124] > pause_image_auth_file = ""
	I0813 00:13:38.933017  743232 command_runner.go:124] > # The command to run to have a container stay in the paused state.
	I0813 00:13:38.933027  743232 command_runner.go:124] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0813 00:13:38.933036  743232 command_runner.go:124] > # specified in the pause image. When commented out, it will fall back to the
	I0813 00:13:38.933044  743232 command_runner.go:124] > # default: "/pause". This option supports live configuration reload.
	I0813 00:13:38.933052  743232 command_runner.go:124] > pause_command = "/pause"
	I0813 00:13:38.933058  743232 command_runner.go:124] > # Path to the file which decides what sort of policy we use when deciding
	I0813 00:13:38.933067  743232 command_runner.go:124] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0813 00:13:38.933076  743232 command_runner.go:124] > # this option be used, as the default behavior of using the system-wide default
	I0813 00:13:38.933085  743232 command_runner.go:124] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0813 00:13:38.933093  743232 command_runner.go:124] > # refer to containers-policy.json(5) for more details.
	I0813 00:13:38.933097  743232 command_runner.go:124] > signature_policy = ""
	I0813 00:13:38.933105  743232 command_runner.go:124] > # List of registries to skip TLS verification for pulling images. Please
	I0813 00:13:38.933112  743232 command_runner.go:124] > # consider configuring the registries via /etc/containers/registries.conf before
	I0813 00:13:38.933118  743232 command_runner.go:124] > # changing them here.
	I0813 00:13:38.933123  743232 command_runner.go:124] > #insecure_registries = "[]"
	I0813 00:13:38.933129  743232 command_runner.go:124] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0813 00:13:38.933139  743232 command_runner.go:124] > # ignore; the latter will ignore volumes entirely.
	I0813 00:13:38.933146  743232 command_runner.go:124] > image_volumes = "mkdir"
	I0813 00:13:38.933152  743232 command_runner.go:124] > # List of registries to be used when pulling an unqualified image (e.g.,
	I0813 00:13:38.933160  743232 command_runner.go:124] > # "alpine:latest"). By default, registries is set to "docker.io" for
	I0813 00:13:38.933169  743232 command_runner.go:124] > # compatibility reasons. Depending on your workload and usecase you may add more
	I0813 00:13:38.933179  743232 command_runner.go:124] > # registries (e.g., "quay.io", "registry.fedoraproject.org",
	I0813 00:13:38.933187  743232 command_runner.go:124] > # "registry.opensuse.org", etc.).
	I0813 00:13:38.933190  743232 command_runner.go:124] > #registries = [
	I0813 00:13:38.933196  743232 command_runner.go:124] > # ]
	I0813 00:13:38.933202  743232 command_runner.go:124] > # Temporary directory to use for storing big files
	I0813 00:13:38.933209  743232 command_runner.go:124] > big_files_temporary_dir = ""
	I0813 00:13:38.933215  743232 command_runner.go:124] > # The crio.network table contains settings pertaining to the management of
	I0813 00:13:38.933271  743232 command_runner.go:124] > # CNI plugins.
	I0813 00:13:38.933281  743232 command_runner.go:124] > [crio.network]
	I0813 00:13:38.933290  743232 command_runner.go:124] > # The default CNI network name to be selected. If not set or "", then
	I0813 00:13:38.933299  743232 command_runner.go:124] > # CRI-O will pick-up the first one found in network_dir.
	I0813 00:13:38.933307  743232 command_runner.go:124] > # cni_default_network = "kindnet"
	I0813 00:13:38.933315  743232 command_runner.go:124] > # Path to the directory where CNI configuration files are located.
	I0813 00:13:38.933320  743232 command_runner.go:124] > network_dir = "/etc/cni/net.d/"
	I0813 00:13:38.933326  743232 command_runner.go:124] > # Paths to directories where CNI plugin binaries are located.
	I0813 00:13:38.933334  743232 command_runner.go:124] > plugin_dirs = [
	I0813 00:13:38.933341  743232 command_runner.go:124] > 	"/opt/cni/bin/",
	I0813 00:13:38.933347  743232 command_runner.go:124] > ]
	I0813 00:13:38.933354  743232 command_runner.go:124] > # A necessary configuration for Prometheus based metrics retrieval
	I0813 00:13:38.933360  743232 command_runner.go:124] > [crio.metrics]
	I0813 00:13:38.933365  743232 command_runner.go:124] > # Globally enable or disable metrics support.
	I0813 00:13:38.933371  743232 command_runner.go:124] > enable_metrics = false
	I0813 00:13:38.933377  743232 command_runner.go:124] > # The port on which the metrics server will listen.
	I0813 00:13:38.933383  743232 command_runner.go:124] > metrics_port = 9090
	I0813 00:13:38.933406  743232 command_runner.go:124] > # Local socket path to bind the metrics server to
	I0813 00:13:38.933413  743232 command_runner.go:124] > metrics_socket = ""
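For reference, the non-default settings CRI-O echoes above reduce to a short crio.conf fragment. This is a reassembly of values already present in the log, not an authoritative minikube configuration:

```toml
# /etc/crio/crio.conf (fragment, reassembled from the logged defaults)
[crio.runtime]
cgroup_manager = "systemd"        # matches the systemd CgroupDriver passed to kubeadm below
conmon_cgroup = "system.slice"
pids_limit = 1024
manage_ns_lifecycle = true
default_runtime = "runc"

[crio.runtime.runtimes.runc]
runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
runtime_type = "oci"
runtime_root = "/run/runc"

[crio.image]
pause_image = "k8s.gcr.io/pause:3.4.1"

[crio.network]
network_dir = "/etc/cni/net.d/"
plugin_dirs = ["/opt/cni/bin/"]
```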
	I0813 00:13:38.933490  743232 cni.go:93] Creating CNI manager for ""
	I0813 00:13:38.933503  743232 cni.go:154] 2 nodes found, recommending kindnet
	I0813 00:13:38.933515  743232 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 00:13:38.933532  743232 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.3 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20210813001157-676638 NodeName:multinode-20210813001157-676638-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.3 CgroupDriver:systemd ClientCAFile
:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 00:13:38.933653  743232 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "multinode-20210813001157-676638-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 00:13:38.933718  743232 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-20210813001157-676638-m02 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813001157-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 00:13:38.933781  743232 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 00:13:38.941484  743232 command_runner.go:124] > kubeadm
	I0813 00:13:38.941508  743232 command_runner.go:124] > kubectl
	I0813 00:13:38.941512  743232 command_runner.go:124] > kubelet
	I0813 00:13:38.941534  743232 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 00:13:38.941579  743232 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0813 00:13:38.948721  743232 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (566 bytes)
	I0813 00:13:38.961045  743232 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 00:13:38.973468  743232 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0813 00:13:38.976415  743232 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 00:13:38.985599  743232 host.go:66] Checking if "multinode-20210813001157-676638" exists ...
	I0813 00:13:38.985856  743232 start.go:241] JoinCluster: &{Name:multinode-20210813001157-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:multinode-20210813001157-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0}
	I0813 00:13:38.985948  743232 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm token create --print-join-command --ttl=0"
	I0813 00:13:38.985989  743232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813001157-676638
	I0813 00:13:39.026119  743232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33293 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813001157-676638/id_rsa Username:docker}
	I0813 00:13:39.177172  743232 command_runner.go:124] > kubeadm join control-plane.minikube.internal:8443 --token xta4y8.av60xrza2rnchiu6 --discovery-token-ca-cert-hash sha256:168e7adac45e0238c7bd00763c6ed6a04340e722951e8dc79c7dd45687f15171 
	I0813 00:13:39.177294  743232 start.go:262] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0813 00:13:39.177328  743232 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xta4y8.av60xrza2rnchiu6 --discovery-token-ca-cert-hash sha256:168e7adac45e0238c7bd00763c6ed6a04340e722951e8dc79c7dd45687f15171 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-20210813001157-676638-m02"
	I0813 00:13:39.223097  743232 command_runner.go:124] > [preflight] Running pre-flight checks
	I0813 00:13:39.244564  743232 command_runner.go:124] > [preflight] The system verification failed. Printing the output from the verification:
	I0813 00:13:39.244593  743232 command_runner.go:124] > KERNEL_VERSION: 4.9.0-16-amd64
	I0813 00:13:39.244601  743232 command_runner.go:124] > OS: Linux
	I0813 00:13:39.244608  743232 command_runner.go:124] > CGROUPS_CPU: enabled
	I0813 00:13:39.244617  743232 command_runner.go:124] > CGROUPS_CPUACCT: enabled
	I0813 00:13:39.244625  743232 command_runner.go:124] > CGROUPS_CPUSET: enabled
	I0813 00:13:39.244632  743232 command_runner.go:124] > CGROUPS_DEVICES: enabled
	I0813 00:13:39.244649  743232 command_runner.go:124] > CGROUPS_FREEZER: enabled
	I0813 00:13:39.244659  743232 command_runner.go:124] > CGROUPS_MEMORY: enabled
	I0813 00:13:39.244668  743232 command_runner.go:124] > CGROUPS_PIDS: enabled
	I0813 00:13:39.244675  743232 command_runner.go:124] > CGROUPS_HUGETLB: missing
	I0813 00:13:39.343946  743232 command_runner.go:124] > [preflight] Reading configuration from the cluster...
	I0813 00:13:39.343975  743232 command_runner.go:124] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0813 00:13:39.370471  743232 command_runner.go:124] > [kubelet-start] WARNING: unable to stop the kubelet service momentarily: [exit status 5]
	I0813 00:13:39.372614  743232 command_runner.go:124] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0813 00:13:39.372716  743232 command_runner.go:124] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0813 00:13:39.372754  743232 command_runner.go:124] > [kubelet-start] Starting the kubelet
	I0813 00:13:39.436298  743232 command_runner.go:124] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0813 00:13:45.456383  743232 command_runner.go:124] > This node has joined the cluster:
	I0813 00:13:45.456414  743232 command_runner.go:124] > * Certificate signing request was sent to apiserver and a response was received.
	I0813 00:13:45.456424  743232 command_runner.go:124] > * The Kubelet was informed of the new secure connection details.
	I0813 00:13:45.456434  743232 command_runner.go:124] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0813 00:13:45.492529  743232 command_runner.go:124] ! 	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	I0813 00:13:45.492569  743232 command_runner.go:124] ! 	[WARNING SystemVerification]: missing optional cgroups: hugetlb
	I0813 00:13:45.492593  743232 command_runner.go:124] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-16-amd64\n", err: exit status 1
	I0813 00:13:45.492609  743232 command_runner.go:124] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0813 00:13:45.492636  743232 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xta4y8.av60xrza2rnchiu6 --discovery-token-ca-cert-hash sha256:168e7adac45e0238c7bd00763c6ed6a04340e722951e8dc79c7dd45687f15171 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-20210813001157-676638-m02": (6.31528994s)
	I0813 00:13:45.492661  743232 ssh_runner.go:149] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0813 00:13:45.641878  743232 command_runner.go:124] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0813 00:13:45.641912  743232 start.go:243] JoinCluster complete in 6.656055231s
	I0813 00:13:45.641926  743232 cni.go:93] Creating CNI manager for ""
	I0813 00:13:45.641934  743232 cni.go:154] 2 nodes found, recommending kindnet
	I0813 00:13:45.641984  743232 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 00:13:45.645422  743232 command_runner.go:124] >   File: /opt/cni/bin/portmap
	I0813 00:13:45.645456  743232 command_runner.go:124] >   Size: 2738488   	Blocks: 5352       IO Block: 4096   regular file
	I0813 00:13:45.645465  743232 command_runner.go:124] > Device: 801h/2049d	Inode: 3807833     Links: 1
	I0813 00:13:45.645475  743232 command_runner.go:124] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0813 00:13:45.645483  743232 command_runner.go:124] > Access: 2021-02-10 15:18:15.000000000 +0000
	I0813 00:13:45.645492  743232 command_runner.go:124] > Modify: 2021-02-10 15:18:15.000000000 +0000
	I0813 00:13:45.645504  743232 command_runner.go:124] > Change: 2021-07-02 14:50:00.997696388 +0000
	I0813 00:13:45.645513  743232 command_runner.go:124] >  Birth: -
	I0813 00:13:45.645564  743232 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 00:13:45.645575  743232 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 00:13:45.659654  743232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 00:13:45.840858  743232 command_runner.go:124] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0813 00:13:45.843020  743232 command_runner.go:124] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0813 00:13:45.845194  743232 command_runner.go:124] > serviceaccount/kindnet unchanged
	I0813 00:13:45.855620  743232 command_runner.go:124] > daemonset.apps/kindnet configured
	I0813 00:13:45.859675  743232 start.go:226] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.21.3 ControlPlane:false Worker:true}
	I0813 00:13:45.862131  743232 out.go:177] * Verifying Kubernetes components...
	I0813 00:13:45.862208  743232 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:13:45.872913  743232 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:13:45.873187  743232 kapi.go:59] client config for multinode-20210813001157-676638: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001157-676638/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/multinode-20210813001
157-676638/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 00:13:45.874527  743232 node_ready.go:35] waiting up to 6m0s for node "multinode-20210813001157-676638-m02" to be "Ready" ...
	I0813 00:13:45.874615  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:45.874632  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:45.874638  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:45.874644  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:45.877101  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:45.877122  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:45.877128  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:45 GMT
	I0813 00:13:45.877133  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:45.877137  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:45.877141  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:45.877145  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:45.877265  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"567","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"20
21-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5466 chars]
	I0813 00:13:46.377884  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:46.377916  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:46.377922  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:46.377926  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:46.380282  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:46.380303  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:46.380309  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:46.380312  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:46.380316  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:46.380319  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:46.380322  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:46 GMT
	I0813 00:13:46.380411  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"567","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"20
21-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5466 chars]
	I0813 00:13:46.877957  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:46.877992  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:46.877998  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:46.878002  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:46.880359  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:46.880386  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:46.880393  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:46.880398  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:46.880403  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:46.880408  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:46.880412  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:46 GMT
	I0813 00:13:46.880535  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"567","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"20
21-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5466 chars]
	I0813 00:13:47.378140  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:47.378164  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:47.378173  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:47.378178  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:47.380687  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:47.380709  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:47.380716  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:47.380721  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:47.380726  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:47.380731  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:47 GMT
	I0813 00:13:47.380736  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:47.380837  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"567","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"20
21-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5466 chars]
	I0813 00:13:47.877723  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:47.877749  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:47.877755  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:47.877759  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:47.880111  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:47.880149  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:47.880157  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:47.880162  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:47.880167  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:47.880171  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:47 GMT
	I0813 00:13:47.880176  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:47.880296  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"567","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"20
21-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5466 chars]
	I0813 00:13:47.880636  743232 node_ready.go:58] node "multinode-20210813001157-676638-m02" has status "Ready":"False"
	I0813 00:13:48.377796  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:48.377889  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:48.377909  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:48.377917  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:48.380623  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:48.380649  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:48.380655  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:48.380662  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:48.380665  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:48.380668  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:48 GMT
	I0813 00:13:48.380671  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:48.380868  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"567","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"20
21-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5466 chars]
	I0813 00:13:48.878391  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:48.878422  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:48.878428  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:48.878432  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:48.880964  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:48.880989  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:48.880995  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:48.880999  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:48.881002  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:48.881005  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:48.881009  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:48 GMT
	I0813 00:13:48.881135  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"567","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"20
21-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5466 chars]
	I0813 00:13:49.378294  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:49.378325  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:49.378331  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:49.378336  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:49.381188  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:49.381211  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:49.381217  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:49 GMT
	I0813 00:13:49.381220  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:49.381283  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:49.381287  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:49.381290  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:49.381419  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"567","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"20
21-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{" [truncated 5466 chars]
	I0813 00:13:49.877817  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:49.877845  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:49.877853  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:49.877859  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:49.881447  743232 round_trippers.go:457] Response Status: 200 OK in 3 milliseconds
	I0813 00:13:49.881474  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:49.881481  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:49 GMT
	I0813 00:13:49.881490  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:49.881495  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:49.881499  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:49.881504  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:49.881603  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"585","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5575 chars]
	I0813 00:13:49.881867  743232 node_ready.go:58] node "multinode-20210813001157-676638-m02" has status "Ready":"False"
	I0813 00:13:50.378408  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:50.378436  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:50.378442  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:50.378446  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:50.381212  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:50.381296  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:50.381305  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:50.381308  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:50.381311  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:50.381317  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:50.381320  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:50 GMT
	I0813 00:13:50.381421  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"585","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5575 chars]
	I0813 00:13:50.878573  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:50.878603  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:50.878610  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:50.878614  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:50.881035  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:50.881059  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:50.881070  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:50.881074  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:50.881080  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:50.881086  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:50.881090  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:50 GMT
	I0813 00:13:50.881273  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"585","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5575 chars]
	I0813 00:13:51.377838  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:51.377869  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:51.377875  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:51.377891  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:51.380654  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:51.380680  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:51.380686  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:51.380690  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:51.380693  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:51.380697  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:51.380700  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:51 GMT
	I0813 00:13:51.380815  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"585","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5575 chars]
	I0813 00:13:51.878448  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:51.878482  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:51.878490  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:51.878497  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:51.880979  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:51.881010  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:51.881016  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:51.881020  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:51.881026  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:51.881031  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:51.881036  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:51 GMT
	I0813 00:13:51.881168  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"585","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5575 chars]
	I0813 00:13:52.378787  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:52.378820  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:52.378826  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:52.378832  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:52.381715  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:52.381743  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:52.381751  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:52.381757  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:52.381762  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:52 GMT
	I0813 00:13:52.381766  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:52.381771  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:52.381870  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"585","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5575 chars]
	I0813 00:13:52.382193  743232 node_ready.go:58] node "multinode-20210813001157-676638-m02" has status "Ready":"False"
	I0813 00:13:52.878684  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:52.878710  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:52.878716  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:52.878720  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:52.881004  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:52.881026  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:52.881033  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:52.881038  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:52.881042  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:52.881047  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:52 GMT
	I0813 00:13:52.881051  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:52.881146  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"585","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5575 chars]
	I0813 00:13:53.378767  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:53.378799  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:53.378805  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:53.378809  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:53.381434  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:53.381457  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:53.381463  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:53.381466  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:53.381470  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:53 GMT
	I0813 00:13:53.381473  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:53.381476  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:53.381560  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"585","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5575 chars]
	I0813 00:13:53.878092  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:53.878121  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:53.878128  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:53.878132  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:53.880557  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:53.880580  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:53.880585  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:53.880589  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:53.880595  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:53.880599  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:53 GMT
	I0813 00:13:53.880604  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:53.880737  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"585","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5575 chars]
	I0813 00:13:54.377981  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:54.378011  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:54.378017  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:54.378022  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:54.380675  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:54.380699  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:54.380707  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:54.380711  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:54.380716  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:54.380720  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:54.380724  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:54 GMT
	I0813 00:13:54.380843  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"585","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5575 chars]
	I0813 00:13:54.878513  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:54.878551  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:54.878557  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:54.878561  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:54.881036  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:54.881075  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:54.881084  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:54.881091  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:54.881096  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:54.881103  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:54.881107  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:54 GMT
	I0813 00:13:54.881214  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"585","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5575 chars]
	I0813 00:13:54.881632  743232 node_ready.go:58] node "multinode-20210813001157-676638-m02" has status "Ready":"False"
	I0813 00:13:55.377736  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:55.377778  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:55.377784  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:55.377789  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:55.380419  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:55.380444  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:55.380451  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:55 GMT
	I0813 00:13:55.380455  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:55.380458  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:55.380461  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:55.380466  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:55.380581  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"585","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 5575 chars]
	I0813 00:13:55.878114  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:55.878146  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:55.878152  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:55.878156  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:55.880678  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:55.880708  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:55.880716  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:55.880721  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:55.880726  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:55.880731  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:55.880736  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:55 GMT
	I0813 00:13:55.880857  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"593","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 5768 chars]
	I0813 00:13:55.881114  743232 node_ready.go:49] node "multinode-20210813001157-676638-m02" has status "Ready":"True"
	I0813 00:13:55.881133  743232 node_ready.go:38] duration metric: took 10.006583056s waiting for node "multinode-20210813001157-676638-m02" to be "Ready" ...
	I0813 00:13:55.881144  743232 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:13:55.881288  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0813 00:13:55.881300  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:55.881305  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:55.881310  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:55.887087  743232 round_trippers.go:457] Response Status: 200 OK in 5 milliseconds
	I0813 00:13:55.887121  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:55.887137  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:55.887142  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:55.887147  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:55.887152  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:55.887156  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:55 GMT
	I0813 00:13:55.887755  743232 request.go:1123] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"594"},"items":[{"metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"527","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller"
:{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k: [truncated 68348 chars]
	I0813 00:13:55.889339  743232 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:55.889427  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-n8vmn
	I0813 00:13:55.889437  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:55.889444  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:55.889448  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:55.891655  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:55.891675  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:55.891681  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:55.891685  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:55.891689  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:55 GMT
	I0813 00:13:55.891692  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:55.891696  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:55.891869  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-558bd4d5db-n8vmn","generateName":"coredns-558bd4d5db-","namespace":"kube-system","uid":"8c6390c7-df4c-4cd1-9668-f97565c6ff6c","resourceVersion":"527","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"558bd4d5db"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-558bd4d5db","uid":"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ba6bfa1-67f2-44f2-bcb1-368a8c18c42a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":
{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:imag [truncated 5736 chars]
	I0813 00:13:55.892231  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:55.892246  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:55.892252  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:55.892256  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:55.894447  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:55.894473  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:55.894481  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:55.894486  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:55.894491  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:55.894496  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:55.894501  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:55 GMT
	I0813 00:13:55.894614  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:55.894908  743232 pod_ready.go:92] pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace has status "Ready":"True"
	I0813 00:13:55.894922  743232 pod_ready.go:81] duration metric: took 5.55841ms waiting for pod "coredns-558bd4d5db-n8vmn" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:55.894933  743232 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20210813001157-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:55.894994  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20210813001157-676638
	I0813 00:13:55.895003  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:55.895008  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:55.895015  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:55.897003  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:55.897023  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:55.897029  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:55.897032  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:55 GMT
	I0813 00:13:55.897036  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:55.897039  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:55.897043  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:55.897121  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20210813001157-676638","namespace":"kube-system","uid":"7bd171f8-9ba7-465d-8320-82ca9b0fe38b","resourceVersion":"487","creationTimestamp":"2021-08-13T00:12:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"8dfde7453a6ad04def13ca08d3dd1846","kubernetes.io/config.mirror":"8dfde7453a6ad04def13ca08d3dd1846","kubernetes.io/config.seen":"2021-08-13T00:12:29.316613009Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.has [truncated 5564 chars]
	I0813 00:13:55.897474  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:55.897488  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:55.897493  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:55.897497  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:55.899335  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:55.899356  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:55.899363  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:55.899368  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:55.899373  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:55.899377  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:55.899382  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:55 GMT
	I0813 00:13:55.899474  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:55.899736  743232 pod_ready.go:92] pod "etcd-multinode-20210813001157-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:13:55.899749  743232 pod_ready.go:81] duration metric: took 4.810173ms waiting for pod "etcd-multinode-20210813001157-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:55.899764  743232 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20210813001157-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:55.899814  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20210813001157-676638
	I0813 00:13:55.899822  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:55.899826  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:55.899830  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:55.901680  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:55.901700  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:55.901706  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:55.901709  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:55.901713  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:55 GMT
	I0813 00:13:55.901715  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:55.901718  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:55.901833  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20210813001157-676638","namespace":"kube-system","uid":"5ccefdb1-48ae-4825-ab83-3c233583f503","resourceVersion":"340","creationTimestamp":"2021-08-13T00:12:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"3509319e0214f60b63092919a691f0e6","kubernetes.io/config.mirror":"3509319e0214f60b63092919a691f0e6","kubernetes.io/config.seen":"2021-08-13T00:12:29.316635432Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-addres [truncated 8091 chars]
	I0813 00:13:55.902254  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:55.902270  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:55.902277  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:55.902286  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:55.904065  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:55.904083  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:55.904090  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:55.904099  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:55.904103  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:55.904108  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:55.904113  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:55 GMT
	I0813 00:13:55.904261  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:55.904532  743232 pod_ready.go:92] pod "kube-apiserver-multinode-20210813001157-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:13:55.904545  743232 pod_ready.go:81] duration metric: took 4.770643ms waiting for pod "kube-apiserver-multinode-20210813001157-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:55.904556  743232 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20210813001157-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:55.904603  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20210813001157-676638
	I0813 00:13:55.904612  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:55.904616  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:55.904620  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:55.906288  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:55.906308  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:55.906314  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:55.906319  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:55.906323  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:55.906328  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:55.906332  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:55 GMT
	I0813 00:13:55.906540  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20210813001157-676638","namespace":"kube-system","uid":"9799e08d-2fec-43b7-b6f7-fecee59f7bfe","resourceVersion":"331","creationTimestamp":"2021-08-13T00:12:29Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fce4605954dd6767ca408495896d3089","kubernetes.io/config.mirror":"fce4605954dd6767ca408495896d3089","kubernetes.io/config.seen":"2021-08-13T00:12:29.316636751Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con
fig.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config [truncated 7657 chars]
	I0813 00:13:55.906923  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:55.906938  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:55.906942  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:55.906947  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:55.908612  743232 round_trippers.go:457] Response Status: 200 OK in 1 milliseconds
	I0813 00:13:55.908629  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:55.908635  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:55.908638  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:55.908641  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:55.908645  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:55.908648  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:55 GMT
	I0813 00:13:55.908738  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:55.908955  743232 pod_ready.go:92] pod "kube-controller-manager-multinode-20210813001157-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:13:55.908966  743232 pod_ready.go:81] duration metric: took 4.402499ms waiting for pod "kube-controller-manager-multinode-20210813001157-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:55.908975  743232 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ljdws" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:56.078332  743232 request.go:600] Waited for 169.26707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ljdws
	I0813 00:13:56.078406  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ljdws
	I0813 00:13:56.078412  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:56.078418  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:56.078422  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:56.081057  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:56.081091  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:56.081097  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:56.081101  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:56.081105  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:56.081108  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:56.081111  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:56 GMT
	I0813 00:13:56.081366  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ljdws","generateName":"kube-proxy-","namespace":"kube-system","uid":"bc71239a-9dae-4c72-ab5e-2d165f5c5c28","resourceVersion":"579","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"31bf8065-dab9-4025-90c4-cbefb4e70b3f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31bf8065-dab9-4025-90c4-cbefb4e70b3f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5764 chars]
	I0813 00:13:56.279179  743232 request.go:600] Waited for 197.416784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:56.279273  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638-m02
	I0813 00:13:56.279280  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:56.279287  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:56.279293  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:56.282079  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:56.282110  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:56.282120  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:56.282125  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:56.282129  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:56 GMT
	I0813 00:13:56.282134  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:56.282139  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:56.282275  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638-m02","uid":"20e6ff69-4a57-4301-8d5b-99bda945a078","resourceVersion":"593","creationTimestamp":"2021-08-13T00:13:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:13:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 5768 chars]
	I0813 00:13:56.282549  743232 pod_ready.go:92] pod "kube-proxy-ljdws" in "kube-system" namespace has status "Ready":"True"
	I0813 00:13:56.282562  743232 pod_ready.go:81] duration metric: took 373.580799ms waiting for pod "kube-proxy-ljdws" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:56.282572  743232 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mkg5f" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:56.479009  743232 request.go:600] Waited for 196.363375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mkg5f
	I0813 00:13:56.479078  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mkg5f
	I0813 00:13:56.479085  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:56.479093  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:56.479104  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:56.481752  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:56.481782  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:56.481788  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:56.481792  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:56.481795  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:56.481798  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:56.481801  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:56 GMT
	I0813 00:13:56.481903  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mkg5f","generateName":"kube-proxy-","namespace":"kube-system","uid":"c0ce1ac6-2a65-4491-b750-c72877628ba1","resourceVersion":"482","creationTimestamp":"2021-08-13T00:12:40Z","labels":{"controller-revision-hash":"7cdcb64568","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"31bf8065-dab9-4025-90c4-cbefb4e70b3f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"31bf8065-dab9-4025-90c4-cbefb4e70b3f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller
":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:affinity":{".": [truncated 5756 chars]
	I0813 00:13:56.678546  743232 request.go:600] Waited for 196.234804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:56.678618  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:56.678626  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:56.678631  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:56.678637  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:56.681154  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:56.681202  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:56.681211  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:56 GMT
	I0813 00:13:56.681216  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:56.681289  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:56.681296  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:56.681300  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:56.681409  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:56.681696  743232 pod_ready.go:92] pod "kube-proxy-mkg5f" in "kube-system" namespace has status "Ready":"True"
	I0813 00:13:56.681712  743232 pod_ready.go:81] duration metric: took 399.134101ms waiting for pod "kube-proxy-mkg5f" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:56.681723  743232 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20210813001157-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:56.879200  743232 request.go:600] Waited for 197.38165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210813001157-676638
	I0813 00:13:56.879286  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20210813001157-676638
	I0813 00:13:56.879293  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:56.879299  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:56.879303  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:56.881755  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:56.881784  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:56.881792  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:56.881797  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:56.881802  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:56.881807  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:56.881812  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:56 GMT
	I0813 00:13:56.881916  743232 request.go:1123] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20210813001157-676638","namespace":"kube-system","uid":"e9679488-3572-4ab9-bd26-84259c8744e1","resourceVersion":"329","creationTimestamp":"2021-08-13T00:12:29Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8f00a7319b0df0d51bb2b1da342fbbf3","kubernetes.io/config.mirror":"8f00a7319b0df0d51bb2b1da342fbbf3","kubernetes.io/config.seen":"2021-08-13T00:12:29.316637767Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-08-13T00:12:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:
kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:la [truncated 4539 chars]
	I0813 00:13:57.078628  743232 request.go:600] Waited for 196.360829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:57.078712  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20210813001157-676638
	I0813 00:13:57.078719  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:57.078724  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:57.078729  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:57.081185  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:57.081284  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:57.081297  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:57.081302  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:57.081307  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:57.081314  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:57.081319  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:57 GMT
	I0813 00:13:57.081504  743232 request.go:1123] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manag
er":"kubeadm","operation":"Update","apiVersion":"v1","time":"2021-08-13 [truncated 6604 chars]
	I0813 00:13:57.082139  743232 pod_ready.go:92] pod "kube-scheduler-multinode-20210813001157-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:13:57.082202  743232 pod_ready.go:81] duration metric: took 400.469071ms waiting for pod "kube-scheduler-multinode-20210813001157-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:13:57.082246  743232 pod_ready.go:38] duration metric: took 1.201076838s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:13:57.082343  743232 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 00:13:57.082439  743232 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:13:57.093414  743232 system_svc.go:56] duration metric: took 11.065876ms WaitForService to wait for kubelet.
	I0813 00:13:57.093439  743232 kubeadm.go:547] duration metric: took 11.233719732s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 00:13:57.093466  743232 node_conditions.go:102] verifying NodePressure condition ...
	I0813 00:13:57.279103  743232 request.go:600] Waited for 185.533258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0813 00:13:57.279170  743232 round_trippers.go:432] GET https://192.168.49.2:8443/api/v1/nodes
	I0813 00:13:57.279176  743232 round_trippers.go:438] Request Headers:
	I0813 00:13:57.279181  743232 round_trippers.go:442]     Accept: application/json, */*
	I0813 00:13:57.279185  743232 round_trippers.go:442]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0813 00:13:57.281864  743232 round_trippers.go:457] Response Status: 200 OK in 2 milliseconds
	I0813 00:13:57.281892  743232 round_trippers.go:460] Response Headers:
	I0813 00:13:57.281898  743232 round_trippers.go:463]     Cache-Control: no-cache, private
	I0813 00:13:57.281902  743232 round_trippers.go:463]     Content-Type: application/json
	I0813 00:13:57.281905  743232 round_trippers.go:463]     X-Kubernetes-Pf-Flowschema-Uid: 1ba289bd-ba86-4737-93c5-44d5242878d0
	I0813 00:13:57.281910  743232 round_trippers.go:463]     X-Kubernetes-Pf-Prioritylevel-Uid: cbec03d5-1ed8-4ec0-a36a-5e138374669d
	I0813 00:13:57.281913  743232 round_trippers.go:463]     Date: Fri, 13 Aug 2021 00:13:57 GMT
	I0813 00:13:57.282062  743232 request.go:1123] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"595"},"items":[{"metadata":{"name":"multinode-20210813001157-676638","uid":"b30f0490-b23e-4af1-bf1a-ee7cf4d747fb","resourceVersion":"384","creationTimestamp":"2021-08-13T00:12:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20210813001157-676638","kubernetes.io/os":"linux","minikube.k8s.io/commit":"dc1c3ca26e9449ce488a773126b8450402c94a19","minikube.k8s.io/name":"multinode-20210813001157-676638","minikube.k8s.io/updated_at":"2021_08_13T00_12_24_0700","minikube.k8s.io/version":"v1.22.0","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-mana
ged-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","opera [truncated 13417 chars]
	I0813 00:13:57.282444  743232 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 00:13:57.282461  743232 node_conditions.go:123] node cpu capacity is 8
	I0813 00:13:57.282473  743232 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 00:13:57.282482  743232 node_conditions.go:123] node cpu capacity is 8
	I0813 00:13:57.282487  743232 node_conditions.go:105] duration metric: took 189.015933ms to run NodePressure ...
	I0813 00:13:57.282501  743232 start.go:231] waiting for startup goroutines ...
	I0813 00:13:57.325962  743232 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 00:13:57.328781  743232 out.go:177] * Done! kubectl is now configured to use "multinode-20210813001157-676638" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 00:12:00 UTC, end at Fri 2021-08-13 00:14:32 UTC. --
	Aug 13 00:13:28 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:28.812459346Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899,RepoTags:[k8s.gcr.io/coredns/coredns:v1.8.0],RepoDigests:[k8s.gcr.io/coredns/coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61 k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e],Size_:42585056,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c674d4ef-3d4d-4d6d-b6a4-8132c9101243 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:13:28 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:28.813509746Z" level=info msg="Creating container: kube-system/coredns-558bd4d5db-n8vmn/coredns" id=fcbd82aa-25d0-4907-89e7-3e16eed9d1ae name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 00:13:28 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:28.827592341Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/094717b23cec65ff12bf41b61159af463d6550ead5652b080f09281d4d7bee14/merged/etc/passwd: no such file or directory"
	Aug 13 00:13:28 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:28.827643977Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/094717b23cec65ff12bf41b61159af463d6550ead5652b080f09281d4d7bee14/merged/etc/group: no such file or directory"
	Aug 13 00:13:28 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:28.960698251Z" level=info msg="Created container 861207fec609c5ff08fc4bdddd5e1e7572e7e0d6136fd521e7fcdfe522079e15: kube-system/coredns-558bd4d5db-n8vmn/coredns" id=fcbd82aa-25d0-4907-89e7-3e16eed9d1ae name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 00:13:28 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:28.961395457Z" level=info msg="Starting container: 861207fec609c5ff08fc4bdddd5e1e7572e7e0d6136fd521e7fcdfe522079e15" id=eeb08c66-03b6-48ef-bbb6-1bdc36d86a3b name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 00:13:28 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:28.971912210Z" level=info msg="Started container 861207fec609c5ff08fc4bdddd5e1e7572e7e0d6136fd521e7fcdfe522079e15: kube-system/coredns-558bd4d5db-n8vmn/coredns" id=eeb08c66-03b6-48ef-bbb6-1bdc36d86a3b name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 00:13:58 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:58.647270751Z" level=info msg="Running pod sandbox: default/busybox-84b6686758-pzxgm/POD" id=444efe02-4d37-4f38-8d80-a0dcba1155c0 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 13 00:13:58 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:58.662431526Z" level=info msg="Got pod network &{Name:busybox-84b6686758-pzxgm Namespace:default ID:f2c0c56e8d2cc6c6baf193f31fb47d3b3ba73240c5ca43a732755ea7ae9877e6 NetNS:/var/run/netns/5870a522-0545-4445-8127-fee0ad95292e Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 13 00:13:58 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:58.662472780Z" level=info msg="About to add CNI network kindnet (type=ptp)"
	Aug 13 00:13:58 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:58.750086715Z" level=info msg="Got pod network &{Name:busybox-84b6686758-pzxgm Namespace:default ID:f2c0c56e8d2cc6c6baf193f31fb47d3b3ba73240c5ca43a732755ea7ae9877e6 NetNS:/var/run/netns/5870a522-0545-4445-8127-fee0ad95292e Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
	Aug 13 00:13:58 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:58.750243419Z" level=info msg="About to check CNI network kindnet (type=ptp)"
	Aug 13 00:13:58 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:58.863897662Z" level=info msg="Ran pod sandbox f2c0c56e8d2cc6c6baf193f31fb47d3b3ba73240c5ca43a732755ea7ae9877e6 with infra container: default/busybox-84b6686758-pzxgm/POD" id=444efe02-4d37-4f38-8d80-a0dcba1155c0 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
	Aug 13 00:13:58 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:58.864923917Z" level=info msg="Checking image status: busybox:1.28" id=fec8b0c1-5049-4389-adaf-1c5bf0163f56 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:13:58 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:58.865390943Z" level=info msg="Image busybox:1.28 not found" id=fec8b0c1-5049-4389-adaf-1c5bf0163f56 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:13:58 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:58.865963785Z" level=info msg="Pulling image: busybox:1.28" id=d0306afb-ec9c-4dc8-830d-cde4c04d6b40 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 00:13:58 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:58.872453873Z" level=info msg="Trying to access \"docker.io/library/busybox:1.28\""
	Aug 13 00:13:59 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:59.023042417Z" level=info msg="Trying to access \"docker.io/library/busybox:1.28\""
	Aug 13 00:13:59 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:59.593456759Z" level=info msg="Pulled image: docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47" id=d0306afb-ec9c-4dc8-830d-cde4c04d6b40 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 00:13:59 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:59.594246779Z" level=info msg="Checking image status: busybox:1.28" id=10be684e-fc52-4db4-8180-b12cc8bacca4 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:13:59 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:59.594828038Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[docker.io/library/busybox:1.28],RepoDigests:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335],Size_:1365634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=10be684e-fc52-4db4-8180-b12cc8bacca4 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:13:59 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:59.595534892Z" level=info msg="Creating container: default/busybox-84b6686758-pzxgm/busybox" id=581a34c9-25fb-4484-9474-695685866bc9 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 00:13:59 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:59.779282947Z" level=info msg="Created container 361925edfaa0fc5b483a8f4c7c92195ded535a00411beeb48f935b853d926089: default/busybox-84b6686758-pzxgm/busybox" id=581a34c9-25fb-4484-9474-695685866bc9 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 00:13:59 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:59.779766511Z" level=info msg="Starting container: 361925edfaa0fc5b483a8f4c7c92195ded535a00411beeb48f935b853d926089" id=b890022c-74ce-4954-81e0-87fc63dae70d name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 00:13:59 multinode-20210813001157-676638 crio[372]: time="2021-08-13 00:13:59.793974360Z" level=info msg="Started container 361925edfaa0fc5b483a8f4c7c92195ded535a00411beeb48f935b853d926089: default/busybox-84b6686758-pzxgm/busybox" id=b890022c-74ce-4954-81e0-87fc63dae70d name=/runtime.v1alpha2.RuntimeService/StartContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                               CREATED              STATE               NAME                      ATTEMPT             POD ID
	361925edfaa0f       docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47   32 seconds ago       Running             busybox                   0                   f2c0c56e8d2cc
	861207fec609c       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899                                    About a minute ago   Running             coredns                   0                   93e2bd011b6af
	be093c2a6b95b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                    About a minute ago   Running             storage-provisioner       0                   572f4196086dd
	6e3fc6ce8a5fa       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92                                    About a minute ago   Running             kube-proxy                0                   ae8a6d3851dc2
	8d295d681cd31       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                    About a minute ago   Running             kindnet-cni               0                   8f16079286860
	3c45a50bc441b       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a                                    2 minutes ago        Running             kube-scheduler            0                   a0777bc763430
	bd19a29a801d2       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9                                    2 minutes ago        Running             kube-controller-manager   0                   2c7407765392b
	dcf681408628d       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934                                    2 minutes ago        Running             etcd                      0                   a7a2bf5e24279
	19dd5e7863c36       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80                                    2 minutes ago        Running             kube-apiserver            0                   360337ef27d46
	
	* 
	* ==> coredns [861207fec609c5ff08fc4bdddd5e1e7572e7e0d6136fd521e7fcdfe522079e15] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20210813001157-676638
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210813001157-676638
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19
	                    minikube.k8s.io/name=multinode-20210813001157-676638
	                    minikube.k8s.io/updated_at=2021_08_13T00_12_24_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 00:12:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210813001157-676638
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 00:14:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 00:14:29 +0000   Fri, 13 Aug 2021 00:12:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 00:14:29 +0000   Fri, 13 Aug 2021 00:12:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 00:14:29 +0000   Fri, 13 Aug 2021 00:12:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 00:14:29 +0000   Fri, 13 Aug 2021 00:12:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    multinode-20210813001157-676638
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 760e67beb8554645829f2357c8eb4ae7
	  System UUID:                e8ba6909-3b30-41fd-ad9f-42f83116fab0
	  Boot ID:                    f12e4c71-5c79-4cb7-b9de-5d4c99f61cf1
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-pzxgm                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 coredns-558bd4d5db-n8vmn                                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-multinode-20210813001157-676638                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m3s
	  kube-system                 kindnet-zhxmb                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-multinode-20210813001157-676638             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-multinode-20210813001157-676638    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-mkg5f                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-multinode-20210813001157-676638             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  2m17s (x5 over 2m17s)  kubelet     Node multinode-20210813001157-676638 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m17s (x5 over 2m17s)  kubelet     Node multinode-20210813001157-676638 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m3s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m3s                   kubelet     Node multinode-20210813001157-676638 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s                   kubelet     Node multinode-20210813001157-676638 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s                   kubelet     Node multinode-20210813001157-676638 status is now: NodeHasSufficientPID
	  Normal  NodeReady                113s                   kubelet     Node multinode-20210813001157-676638 status is now: NodeReady
	  Normal  Starting                 111s                   kube-proxy  Starting kube-proxy.
	
	
	Name:               multinode-20210813001157-676638-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20210813001157-676638-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 00:13:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20210813001157-676638-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 00:14:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 00:14:15 +0000   Fri, 13 Aug 2021 00:13:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 00:14:15 +0000   Fri, 13 Aug 2021 00:13:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 00:14:15 +0000   Fri, 13 Aug 2021 00:13:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 00:14:15 +0000   Fri, 13 Aug 2021 00:13:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    multinode-20210813001157-676638-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 760e67beb8554645829f2357c8eb4ae7
	  System UUID:                0b279904-e66d-4a96-95fd-d281a94ff9c8
	  Boot ID:                    f12e4c71-5c79-4cb7-b9de-5d4c99f61cf1
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-84b6686758-j4hzl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kindnet-cmrc9               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      47s
	  kube-system                 kube-proxy-ljdws            0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 47s   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s   kubelet     Node multinode-20210813001157-676638-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s   kubelet     Node multinode-20210813001157-676638-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s   kubelet     Node multinode-20210813001157-676638-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 44s   kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                37s   kubelet     Node multinode-20210813001157-676638-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000004] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff b6 b5 0a f1 56 3d 08 06        ..........V=..
	[Aug13 00:10] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth6ed40699
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff e2 8a 7c 5a 3c 1c 08 06        ........|Z<...
	[ +27.830433] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 00:11] cgroup: cgroup2: unknown option "nsdelegate"
	[ +26.593786] cgroup: cgroup2: unknown option "nsdelegate"
	[ +26.528384] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 00:12] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 32 5d dd 44 a2 2e 08 06        ......2].D....
	[  +0.000006] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 32 5d dd 44 a2 2e 08 06        ......2].D....
	[Aug13 00:13] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 5a 8a 66 1b 4c b3 08 06        ......Z.f.L...
	[ +11.985602] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth360bdf25
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ca fd 21 de b2 15 08 06        ........!.....
	[  +4.630634] cgroup: cgroup2: unknown option "nsdelegate"
	[ +25.439457] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth1411f06f
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ce 3f b0 1e 14 b3 08 06        .......?......
	[Aug13 00:14] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 72 22 bc 8f df 5d 08 06        ......r"...]..
	[  +0.000005] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 72 22 bc 8f df 5d 08 06        ......r"...]..
	[ +18.748147] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev veth0389e832
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 8e d0 2e 9f 7e 5b 08 06        ..........~[..
	
	* 
	* ==> etcd [dcf681408628db7adb00d7b422ed5704125ba7e73a657b23b3937e7df6f5cdf0] <==
	* 2021-08-13 00:12:59.080551 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:13:09.081124 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:13:19.080627 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:13:29.080421 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:13:39.081249 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:13:49.080691 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:13:59.080779 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:14:09.080486 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:14:10.213858 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.485647978s) to execute
	2021-08-13 00:14:10.213884 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true " with result "range_response_count:0 size:5" took too long (2.121850075s) to execute
	2021-08-13 00:14:10.213921 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1129" took too long (1.16890671s) to execute
	2021-08-13 00:14:11.101150 W | wal: sync duration of 2.02068923s, expected less than 1s
	2021-08-13 00:14:11.101510 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" count_only:true " with result "range_response_count:0 size:7" took too long (1.689780905s) to execute
	2021-08-13 00:14:11.964777 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context canceled" took too long (2.000119408s) to execute
	WARNING: 2021/08/13 00:14:11 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2021-08-13 00:14:12.219772 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000087804s) to execute
	2021-08-13 00:14:12.474010 W | wal: sync duration of 1.37265172s, expected less than 1s
	2021-08-13 00:14:12.967910 W | etcdserver: read-only range request "key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (2.837696077s) to execute
	2021-08-13 00:14:12.968020 W | etcdserver: request "header:<ID:8128006928785650294 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-multinode-20210813001157-676638.169ab562fab27920\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-multinode-20210813001157-676638.169ab562fab27920\" value_size:761 lease:8128006928785649909 >> failure:<>>" with result "size:16" took too long (493.653229ms) to execute
	2021-08-13 00:14:12.968121 W | etcdserver: read-only range request "key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true " with result "range_response_count:0 size:7" took too long (3.05382342s) to execute
	2021-08-13 00:14:12.968376 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (743.79086ms) to execute
	2021-08-13 00:14:12.968472 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (659.212835ms) to execute
	2021-08-13 00:14:12.968574 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:2 size:11480" took too long (1.067320091s) to execute
	2021-08-13 00:14:19.080398 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:14:29.080961 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  00:14:32 up  3:57,  0 users,  load average: 1.16, 1.28, 1.97
	Linux multinode-20210813001157-676638 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [19dd5e7863c367cd07166ca8faefc45bad3f141da8e1fd901416ef0375884b40] <==
	* I0813 00:14:09.864921       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 00:14:09.864929       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 00:14:10.214564       1 trace.go:205] Trace[1620108663]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 00:14:09.044) (total time: 1170ms):
	Trace[1620108663]: ---"About to write a response" 1170ms (00:14:00.214)
	Trace[1620108663]: [1.170125741s] [1.170125741s] END
	I0813 00:14:11.102170       1 trace.go:205] Trace[451782263]: "GuaranteedUpdate etcd3" type:*core.Endpoints (13-Aug-2021 00:14:10.221) (total time: 880ms):
	Trace[451782263]: ---"Transaction committed" 880ms (00:14:00.102)
	Trace[451782263]: [880.904013ms] [880.904013ms] END
	I0813 00:14:11.102330       1 trace.go:205] Trace[297038626]: "Update" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 00:14:10.220) (total time: 881ms):
	Trace[297038626]: ---"Object stored in database" 881ms (00:14:00.102)
	Trace[297038626]: [881.339458ms] [881.339458ms] END
	I0813 00:14:12.969998       1 trace.go:205] Trace[1072008737]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 00:14:11.966) (total time: 1002ms):
	Trace[1072008737]: ---"Object stored in database" 1002ms (00:14:00.969)
	Trace[1072008737]: [1.002979432s] [1.002979432s] END
	I0813 00:14:12.970147       1 trace.go:205] Trace[1706570281]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (13-Aug-2021 00:14:11.900) (total time: 1069ms):
	Trace[1706570281]: [1.069566438s] [1.069566438s] END
	I0813 00:14:12.970698       1 trace.go:205] Trace[1140170780]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (13-Aug-2021 00:14:11.900) (total time: 1070ms):
	Trace[1140170780]: ---"Listing from storage done" 1069ms (00:14:00.970)
	Trace[1140170780]: [1.070149033s] [1.070149033s] END
	I0813 00:14:12.971262       1 trace.go:205] Trace[1678419266]: "GuaranteedUpdate etcd3" type:*coordination.Lease (13-Aug-2021 00:14:12.031) (total time: 940ms):
	Trace[1678419266]: ---"Transaction committed" 939ms (00:14:00.971)
	Trace[1678419266]: [940.118358ms] [940.118358ms] END
	I0813 00:14:12.971422       1 trace.go:205] Trace[834040150]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-20210813001157-676638,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (13-Aug-2021 00:14:12.030) (total time: 940ms):
	Trace[834040150]: ---"Object stored in database" 940ms (00:14:00.971)
	Trace[834040150]: [940.484124ms] [940.484124ms] END
	
	* 
	* ==> kube-controller-manager [bd19a29a801d29bf4fb0e32935bc5ee2c1db01c9c51d4d25f19308e3068b769d] <==
	* I0813 00:12:39.542880       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 00:12:39.583509       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0813 00:12:39.633810       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 00:12:39.887911       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 00:12:40.054905       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 00:12:40.055671       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 00:12:40.082887       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 00:12:40.082913       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 00:12:40.098350       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zhxmb"
	I0813 00:12:40.100280       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mkg5f"
	I0813 00:12:40.237549       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-vvdnj"
	I0813 00:12:40.243301       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-n8vmn"
	I0813 00:12:40.258195       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-vvdnj"
	W0813 00:13:45.415861       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20210813001157-676638-m02" does not exist
	I0813 00:13:45.432413       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ljdws"
	I0813 00:13:45.434518       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-cmrc9"
	I0813 00:13:45.436766       1 range_allocator.go:373] Set node multinode-20210813001157-676638-m02 PodCIDR to [10.244.1.0/24]
	E0813 00:13:45.446991       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"31bf8065-dab9-4025-90c4-cbefb4e70b3f", ResourceVersion:"483", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764410344, loc:(*time.Location)(0x72ff440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00000ca80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00000cab0)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00000cac8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00000cae0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001d02f60), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00074bf40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00000cb10), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00000cb40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.21.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001d02fe0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001e08e40), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001c9d2d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000782150), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000cc76e0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001c9d328)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	W0813 00:13:49.444354       1 node_lifecycle_controller.go:1013] Missing timestamp for Node multinode-20210813001157-676638-m02. Assuming now as a timestamp.
	I0813 00:13:49.444408       1 event.go:291] "Event occurred" object="multinode-20210813001157-676638-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20210813001157-676638-m02 event: Registered Node multinode-20210813001157-676638-m02 in Controller"
	I0813 00:13:58.331900       1 event.go:291] "Event occurred" object="default/busybox" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-84b6686758 to 2"
	I0813 00:13:58.337072       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-j4hzl"
	I0813 00:13:58.339682       1 event.go:291] "Event occurred" object="default/busybox-84b6686758" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-84b6686758-pzxgm"
	I0813 00:13:59.455337       1 event.go:291] "Event occurred" object="default/busybox-84b6686758-j4hzl" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-84b6686758-j4hzl"
	
	* 
	* ==> kube-proxy [6e3fc6ce8a5fa324053b63552bb616a6750bba4f180fba43226fa7329bedb136] <==
	* I0813 00:12:41.779523       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0813 00:12:41.779599       1 server_others.go:140] Detected node IP 192.168.49.2
	W0813 00:12:41.779651       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 00:12:41.809167       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 00:12:41.809197       1 server_others.go:212] Using iptables Proxier.
	I0813 00:12:41.809207       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 00:12:41.809219       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 00:12:41.809682       1 server.go:643] Version: v1.21.3
	I0813 00:12:41.810221       1 config.go:315] Starting service config controller
	I0813 00:12:41.810253       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 00:12:41.810313       1 config.go:224] Starting endpoint slice config controller
	I0813 00:12:41.810402       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 00:12:41.812388       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 00:12:41.813553       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 00:12:41.910953       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 00:12:41.910982       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [3c45a50bc441ba0c2a6b66387ecb2e7ce7c0020bab15c35de08d7d282591c2ca] <==
	* I0813 00:12:21.410061       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 00:12:21.410334       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0813 00:12:21.410401       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 00:12:21.412427       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 00:12:21.416318       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 00:12:21.416497       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 00:12:21.416628       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 00:12:21.416716       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 00:12:21.416731       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 00:12:21.416794       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 00:12:21.416837       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 00:12:21.416859       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 00:12:21.416865       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 00:12:21.416935       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 00:12:21.416968       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 00:12:21.416980       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 00:12:21.489930       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 00:12:22.220051       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 00:12:22.307785       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 00:12:22.368553       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 00:12:22.455164       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 00:12:22.491594       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 00:12:22.590963       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 00:12:22.632581       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0813 00:12:24.110487       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 00:12:00 UTC, end at Fri 2021-08-13 00:14:33 UTC. --
	Aug 13 00:12:41 multinode-20210813001157-676638 kubelet[1598]: I0813 00:12:41.612396    1598 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 00:12:41 multinode-20210813001157-676638 kubelet[1598]: I0813 00:12:41.717394    1598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k28rm\" (UniqueName: \"kubernetes.io/projected/5ea28bd2-65e3-48c2-9aad-361791248c9a-kube-api-access-k28rm\") pod \"storage-provisioner\" (UID: \"5ea28bd2-65e3-48c2-9aad-361791248c9a\") "
	Aug 13 00:12:41 multinode-20210813001157-676638 kubelet[1598]: I0813 00:12:41.717494    1598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5ea28bd2-65e3-48c2-9aad-361791248c9a-tmp\") pod \"storage-provisioner\" (UID: \"5ea28bd2-65e3-48c2-9aad-361791248c9a\") "
	Aug 13 00:12:49 multinode-20210813001157-676638 kubelet[1598]: E0813 00:12:49.841443    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80\": RecentStats: unable to find data in memory cache]"
	Aug 13 00:12:51 multinode-20210813001157-676638 kubelet[1598]: E0813 00:12:51.544109    1598 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-n8vmn_kube-system_8c6390c7-df4c-4cd1-9668-f97565c6ff6c_0(f43ea4dfb36386885ffff00d05ae9eca0935a0a643664589e9ce893580313fe7): failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
	Aug 13 00:12:51 multinode-20210813001157-676638 kubelet[1598]: E0813 00:12:51.544221    1598 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-n8vmn_kube-system_8c6390c7-df4c-4cd1-9668-f97565c6ff6c_0(f43ea4dfb36386885ffff00d05ae9eca0935a0a643664589e9ce893580313fe7): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-n8vmn"
	Aug 13 00:12:51 multinode-20210813001157-676638 kubelet[1598]: E0813 00:12:51.544252    1598 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-n8vmn_kube-system_8c6390c7-df4c-4cd1-9668-f97565c6ff6c_0(f43ea4dfb36386885ffff00d05ae9eca0935a0a643664589e9ce893580313fe7): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-n8vmn"
	Aug 13 00:12:51 multinode-20210813001157-676638 kubelet[1598]: E0813 00:12:51.544326    1598 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-558bd4d5db-n8vmn_kube-system(8c6390c7-df4c-4cd1-9668-f97565c6ff6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-558bd4d5db-n8vmn_kube-system(8c6390c7-df4c-4cd1-9668-f97565c6ff6c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-n8vmn_kube-system_8c6390c7-df4c-4cd1-9668-f97565c6ff6c_0(f43ea4dfb36386885ffff00d05ae9eca0935a0a643664589e9ce893580313fe7): failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied\"" pod="kube-system/coredns-558bd4d5db-n8vmn" podUID=8c6390c7-df4c-4cd1-9668-f97565c6ff6c
	Aug 13 00:12:59 multinode-20210813001157-676638 kubelet[1598]: E0813 00:12:59.901442    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80\": RecentStats: unable to find data in memory cache]"
	Aug 13 00:13:09 multinode-20210813001157-676638 kubelet[1598]: E0813 00:13:09.960512    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80\": RecentStats: unable to find data in memory cache]"
	Aug 13 00:13:16 multinode-20210813001157-676638 kubelet[1598]: E0813 00:13:16.855633    1598 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-n8vmn_kube-system_8c6390c7-df4c-4cd1-9668-f97565c6ff6c_0(dde4dba721a441f81e3b14b9e9f78d5e15cef9864ad2ac152cd73dc105580efb): failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
	Aug 13 00:13:16 multinode-20210813001157-676638 kubelet[1598]: E0813 00:13:16.855725    1598 kuberuntime_sandbox.go:68] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-n8vmn_kube-system_8c6390c7-df4c-4cd1-9668-f97565c6ff6c_0(dde4dba721a441f81e3b14b9e9f78d5e15cef9864ad2ac152cd73dc105580efb): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-n8vmn"
	Aug 13 00:13:16 multinode-20210813001157-676638 kubelet[1598]: E0813 00:13:16.855760    1598 kuberuntime_manager.go:790] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-n8vmn_kube-system_8c6390c7-df4c-4cd1-9668-f97565c6ff6c_0(dde4dba721a441f81e3b14b9e9f78d5e15cef9864ad2ac152cd73dc105580efb): failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-558bd4d5db-n8vmn"
	Aug 13 00:13:16 multinode-20210813001157-676638 kubelet[1598]: E0813 00:13:16.855861    1598 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-558bd4d5db-n8vmn_kube-system(8c6390c7-df4c-4cd1-9668-f97565c6ff6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-558bd4d5db-n8vmn_kube-system(8c6390c7-df4c-4cd1-9668-f97565c6ff6c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-558bd4d5db-n8vmn_kube-system_8c6390c7-df4c-4cd1-9668-f97565c6ff6c_0(dde4dba721a441f81e3b14b9e9f78d5e15cef9864ad2ac152cd73dc105580efb): failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied\"" pod="kube-system/coredns-558bd4d5db-n8vmn" podUID=8c6390c7-df4c-4cd1-9668-f97565c6ff6c
	Aug 13 00:13:20 multinode-20210813001157-676638 kubelet[1598]: E0813 00:13:20.020119    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80\": RecentStats: unable to find data in memory cache]"
	Aug 13 00:13:30 multinode-20210813001157-676638 kubelet[1598]: E0813 00:13:30.087337    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80\": RecentStats: unable to find data in memory cache]"
	Aug 13 00:13:34 multinode-20210813001157-676638 kubelet[1598]: W0813 00:13:34.795985    1598 container.go:586] Failed to update stats for container "/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80": /sys/fs/cgroup/cpuset/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80/cpuset.cpus found to be empty, continuing to push stats
	Aug 13 00:13:40 multinode-20210813001157-676638 kubelet[1598]: E0813 00:13:40.153249    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80\": RecentStats: unable to find data in memory cache]"
	Aug 13 00:13:50 multinode-20210813001157-676638 kubelet[1598]: E0813 00:13:50.221393    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80\": RecentStats: unable to find data in memory cache]"
	Aug 13 00:13:58 multinode-20210813001157-676638 kubelet[1598]: I0813 00:13:58.345692    1598 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 00:13:58 multinode-20210813001157-676638 kubelet[1598]: I0813 00:13:58.461612    1598 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5dcz\" (UniqueName: \"kubernetes.io/projected/de40b80d-071c-4945-8d25-e4ca32f93204-kube-api-access-f5dcz\") pod \"busybox-84b6686758-pzxgm\" (UID: \"de40b80d-071c-4945-8d25-e4ca32f93204\") "
	Aug 13 00:14:00 multinode-20210813001157-676638 kubelet[1598]: E0813 00:14:00.292477    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80\": RecentStats: unable to find data in memory cache]"
	Aug 13 00:14:10 multinode-20210813001157-676638 kubelet[1598]: E0813 00:14:10.371194    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80\": RecentStats: unable to find data in memory cache]"
	Aug 13 00:14:20 multinode-20210813001157-676638 kubelet[1598]: E0813 00:14:20.447463    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80\": RecentStats: unable to find data in memory cache]"
	Aug 13 00:14:30 multinode-20210813001157-676638 kubelet[1598]: E0813 00:14:30.522106    1598 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80\": RecentStats: unable to find data in memory cache]"
	
	* 
	* ==> storage-provisioner [be093c2a6b95ba394970563cdc3062ddfc2252641da2d9455a2c7777e4700387] <==
	* I0813 00:12:42.535380       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 00:12:42.543442       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 00:12:42.543491       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 00:12:42.550495       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 00:12:42.550662       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20210813001157-676638_cba09b7d-16a2-4708-a6a2-9362e113e0b4!
	I0813 00:12:42.550616       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3221d40e-b3ec-4a6b-acf3-16f6fa58bf0e", APIVersion:"v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20210813001157-676638_cba09b7d-16a2-4708-a6a2-9362e113e0b4 became leader
	I0813 00:12:42.651024       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20210813001157-676638_cba09b7d-16a2-4708-a6a2-9362e113e0b4!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20210813001157-676638 -n multinode-20210813001157-676638
helpers_test.go:262: (dbg) Run:  kubectl --context multinode-20210813001157-676638 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context multinode-20210813001157-676638 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context multinode-20210813001157-676638 describe pod : exit status 1 (53.701711ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context multinode-20210813001157-676638 describe pod : exit status 1
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.75s)

                                                
                                    
TestPreload (152.69s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210813002243-676638 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.17.0
E0813 00:23:10.031735  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
E0813 00:23:21.598474  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
E0813 00:24:33.079653  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210813002243-676638 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.17.0: (1m54.447676455s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210813002243-676638 -- sudo crictl pull busybox
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210813002243-676638 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio --kubernetes-version=v1.17.3
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210813002243-676638 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio --kubernetes-version=v1.17.3: (31.291631218s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210813002243-676638 -- sudo crictl image ls
preload_test.go:85: Expected to find busybox in output of `docker images`, instead got 
-- stdout --
	IMAGE               TAG                 IMAGE ID            SIZE

                                                
                                                
-- /stdout --
panic.go:613: *** TestPreload FAILED at 2021-08-13 00:25:10.505015522 +0000 UTC m=+1829.117011891
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect test-preload-20210813002243-676638
helpers_test.go:236: (dbg) docker inspect test-preload-20210813002243-676638:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ab077cfb67f566d12dda09298608bdcc919b229ace2caf8dd2984819218efcb0",
	        "Created": "2021-08-13T00:22:45.904222465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 802813,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T00:22:46.644169131Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
	        "ResolvConfPath": "/var/lib/docker/containers/ab077cfb67f566d12dda09298608bdcc919b229ace2caf8dd2984819218efcb0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ab077cfb67f566d12dda09298608bdcc919b229ace2caf8dd2984819218efcb0/hostname",
	        "HostsPath": "/var/lib/docker/containers/ab077cfb67f566d12dda09298608bdcc919b229ace2caf8dd2984819218efcb0/hosts",
	        "LogPath": "/var/lib/docker/containers/ab077cfb67f566d12dda09298608bdcc919b229ace2caf8dd2984819218efcb0/ab077cfb67f566d12dda09298608bdcc919b229ace2caf8dd2984819218efcb0-json.log",
	        "Name": "/test-preload-20210813002243-676638",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-20210813002243-676638:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-20210813002243-676638",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5fdbbc4266aaf2f1905c1795381ba92008928753fb291845b55dde9651b09e0c-init/diff:/var/lib/docker/overlay2/dbcccdfd1d8030c8fd84392abd0651a1c83d85eef1664675f19095ba94d0669c/diff:/var/lib/docker/overlay2/48560ccfa5a167568c6c277306b75040147fa803b45938da98f999b9b34770ec/diff:/var/lib/docker/overlay2/82edbb53b45859b009a31b65fc937517517994e9f7f2b61ab6a2cd9b5d793ea6/diff:/var/lib/docker/overlay2/67407816da0f4fc9226789b4471160bc847b978aa567cac46bd77c492c2e0bd8/diff:/var/lib/docker/overlay2/56fdeb530def71ef2955d22a7a9769b93f1dfc06d3e44e40ff20fce371d47e93/diff:/var/lib/docker/overlay2/eb0df517e10831d2d369ffbcc40b44f2ae8a39b1845697429224cb9ee96aef88/diff:/var/lib/docker/overlay2/0f2b796a50d0eef34622b78f6ac5a1b4914163a3b69965848a40245456d0a358/diff:/var/lib/docker/overlay2/fbdca95cd15a30d761b8949a28398e3694f3cd5af4e11f01066b8aa89ab0e223/diff:/var/lib/docker/overlay2/94899cbf3c806327e740cdc8b542a92bcf6e487ba93ab006749e9b13198b697a/diff:/var/lib/docker/overlay2/26a7c874215c711e77443c1fe264e626d5672e0127f6210b657baea90dc79adb/diff:/var/lib/docker/overlay2/16bd4fd277923e4600e9bd3819ae284053a099ab01e82d41f29752792664be0e/diff:/var/lib/docker/overlay2/7309f9c878e5d24824d68bef540877dc63f2d4c0745de5d0bf7f09e2a65c4600/diff:/var/lib/docker/overlay2/69de2b4390e19f2dda71ecf7c7fef7a9c01fabcf86a7c439a2919ae1284c8de6/diff:/var/lib/docker/overlay2/0ff6ec4f8c21672b1a77bd0033b8786942a677b797ffa1c0fbbb8e03a13d84ed/diff:/var/lib/docker/overlay2/d672d17598d05d9daa3eddac9f958d6913ebfccf431eb143f1f3903b89d150a9/diff:/var/lib/docker/overlay2/0f5d711484163b1b3f60dd6126d6daa0154c241a003764ef80e81d73d68b3ed6/diff:/var/lib/docker/overlay2/d3e7cb92a45651117204153d8d9bc8490164e7c8f439d0c6d157aebf680816ae/diff:/var/lib/docker/overlay2/4b81367fe927507da6730098aedd39a4bd58482dacc101a1dd66f191161dce2d/diff:/var/lib/docker/overlay2/5e9324cbc949319d8647c63cf76f1776a9474d1b961f604c7d87daeb7ebb111d/diff:/var/lib/docker/overlay2/010e1940f131233ee479e023b64f3d26d5b8444f44686cc3f0f1508d966a3c37/diff:/var/lib/docker/overlay2/842ba2e088d8e8cdfa07950eb5be4178d7c22d5932419eb6881e2551df6383d1/diff:/var/lib/docker/overlay2/5a3a00a19445c1d8b4de2bac2fee0c796647356d04b424b1a92c252905d279b0/diff:/var/lib/docker/overlay2/fe2f56e2617a01ef886be702089b24e7058e63d8e824252c67d4c1a0879ad160/diff:/var/lib/docker/overlay2/38b35bcc55b3c7019af7c73f5eed6e0fc162e93a9f9dc7005f87a42358887784/diff:/var/lib/docker/overlay2/d9c894d408f003f4a81d231415f823e9936440a1ee3195405f2fa88b29cd4174/diff:/var/lib/docker/overlay2/1f809a5b11bbef9de3b7711ec341e3852caa4fd2c21e59015b082ae96980b66a/diff:/var/lib/docker/overlay2/99b8edcd10c58a9d6dc18c04bc3d78ee5e078fd13677063e50d0f8b7cd484f8e/diff:/var/lib/docker/overlay2/b7e659e3e24c55bbbb4560a609f06959cff67515ccfed5251eb149eb25e46066/diff:/var/lib/docker/overlay2/cd8af3183f19e2c4a376399788541c30ba2531a85eeecf9fe11864d329a507d9/diff:/var/lib/docker/overlay2/84813126d4751fc1c3f21d3f70102678caac8153899dc8a5e0af833842e299a8/diff:/var/lib/docker/overlay2/2a328079a8a98d312436a8d89f7b47dde7400fe0357b71b76ed6bc760f80f741/diff:/var/lib/docker/overlay2/68fb29110f487206a1dee378747a2f3ef1c89149c9968662726587ea918839d7/diff:/var/lib/docker/overlay2/f9baf28d86b9d2aa6fbb47eab690cb3a8a89d77efe26a5f0c73e8f86bce7214f/diff:/var/lib/docker/overlay2/dad436e2a201d537bbbd0e375ec55a078b79dad52ee0a39399e1d1044bef8136/diff:/var/lib/docker/overlay2/4c5f3abd2b3256b532e1522df89aaca80508afb850fe2081fd29e234ecc52a3c/diff:/var/lib/docker/overlay2/abd7c1d6e94e128091e4cd7c4e2b418a6e7f40430fa8e22724424ee318edfaa2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5fdbbc4266aaf2f1905c1795381ba92008928753fb291845b55dde9651b09e0c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5fdbbc4266aaf2f1905c1795381ba92008928753fb291845b55dde9651b09e0c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5fdbbc4266aaf2f1905c1795381ba92008928753fb291845b55dde9651b09e0c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-20210813002243-676638",
	                "Source": "/var/lib/docker/volumes/test-preload-20210813002243-676638/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-20210813002243-676638",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-20210813002243-676638",
	                "name.minikube.sigs.k8s.io": "test-preload-20210813002243-676638",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c36d83fa6b8e19d9951cedf2ac72a879f39f20446d6cc2bc286e53c2b1aa475d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33343"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33342"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33339"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33341"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33340"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c36d83fa6b8e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-20210813002243-676638": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ab077cfb67f5"
	                    ],
	                    "NetworkID": "84940e73528be4593cef2aa37e17c71111b3c99a09005b204d419e45c67fa3e7",
	                    "EndpointID": "b8da5eb4ad89aff6e5a146dbfa33b38b208c616ad38cd110dee00eb154c149f9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-20210813002243-676638 -n test-preload-20210813002243-676638
helpers_test.go:245: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-20210813002243-676638 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p test-preload-20210813002243-676638 logs -n 25: (1.428724009s)
helpers_test.go:253: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                             Args                             |               Profile               |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| kubectl | -p                                                           | multinode-20210813001157-676638     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:14:31 UTC | Fri, 13 Aug 2021 00:14:31 UTC |
	|         | multinode-20210813001157-676638                              |                                     |         |         |                               |                               |
	|         | -- exec                                                      |                                     |         |         |                               |                               |
	|         | busybox-84b6686758-pzxgm                                     |                                     |         |         |                               |                               |
	|         | -- sh -c nslookup                                            |                                     |         |         |                               |                               |
	|         | host.minikube.internal | awk                                 |                                     |         |         |                               |                               |
	|         | 'NR==5' | cut -d' ' -f3                                      |                                     |         |         |                               |                               |
	| -p      | multinode-20210813001157-676638                              | multinode-20210813001157-676638     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:14:31 UTC | Fri, 13 Aug 2021 00:14:33 UTC |
	|         | logs -n 25                                                   |                                     |         |         |                               |                               |
	| node    | add -p                                                       | multinode-20210813001157-676638     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:14:34 UTC | Fri, 13 Aug 2021 00:15:00 UTC |
	|         | multinode-20210813001157-676638                              |                                     |         |         |                               |                               |
	|         | -v 3 --alsologtostderr                                       |                                     |         |         |                               |                               |
	| profile | list --output json                                           | minikube                            | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:15:00 UTC | Fri, 13 Aug 2021 00:15:01 UTC |
	| -p      | multinode-20210813001157-676638                              | multinode-20210813001157-676638     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:15:02 UTC | Fri, 13 Aug 2021 00:15:02 UTC |
	|         | cp testdata/cp-test.txt                                      |                                     |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                     |                                     |         |         |                               |                               |
	| -p      | multinode-20210813001157-676638                              | multinode-20210813001157-676638     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:15:02 UTC | Fri, 13 Aug 2021 00:15:02 UTC |
	|         | ssh sudo cat                                                 |                                     |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                     |                                     |         |         |                               |                               |
	| -p      | multinode-20210813001157-676638 cp testdata/cp-test.txt      | multinode-20210813001157-676638     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:15:02 UTC | Fri, 13 Aug 2021 00:15:02 UTC |
	|         | multinode-20210813001157-676638-m02:/home/docker/cp-test.txt |                                     |         |         |                               |                               |
	| -p      | multinode-20210813001157-676638                              | multinode-20210813001157-676638     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:15:02 UTC | Fri, 13 Aug 2021 00:15:03 UTC |
	|         | ssh -n                                                       |                                     |         |         |                               |                               |
	|         | multinode-20210813001157-676638-m02                          |                                     |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                            |                                     |         |         |                               |                               |
	| -p      | multinode-20210813001157-676638 cp testdata/cp-test.txt      | multinode-20210813001157-676638     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:15:03 UTC | Fri, 13 Aug 2021 00:15:03 UTC |
	|         | multinode-20210813001157-676638-m03:/home/docker/cp-test.txt |                                     |         |         |                               |                               |
	| -p      | multinode-20210813001157-676638                              | multinode-20210813001157-676638     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:15:03 UTC | Fri, 13 Aug 2021 00:15:03 UTC |
	|         | ssh -n                                                       |                                     |         |         |                               |                               |
	|         | multinode-20210813001157-676638-m03                          |                                     |         |         |                               |                               |
	|         | sudo cat /home/docker/cp-test.txt                            |                                     |         |         |                               |                               |
	| -p      | multinode-20210813001157-676638                              | multinode-20210813001157-676638     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:15:03 UTC | Fri, 13 Aug 2021 00:15:05 UTC |
	|         | node stop m03                                                |                                     |         |         |                               |                               |
	| -p      | multinode-20210813001157-676638                              | multinode-20210813001157-676638     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:15:06 UTC | Fri, 13 Aug 2021 00:15:37 UTC |
	|         | node start m03                                               |                                     |         |         |                               |                               |
	|         | --alsologtostderr                                            |                                     |         |         |                               |                               |
	| stop    | -p                                                           | multinode-20210813001157-676638     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:15:38 UTC | Fri, 13 Aug 2021 00:16:21 UTC |
	|         | multinode-20210813001157-676638                              |                                     |         |         |                               |                               |
	| start   | -p                                                           | multinode-20210813001157-676638     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:16:21 UTC | Fri, 13 Aug 2021 00:18:22 UTC |
	|         | multinode-20210813001157-676638                              |                                     |         |         |                               |                               |
	|         | --wait=true -v=8                                             |                                     |         |         |                               |                               |
	|         | --alsologtostderr                                            |                                     |         |         |                               |                               |
	| -p      | multinode-20210813001157-676638                              | multinode-20210813001157-676638     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:18:22 UTC | Fri, 13 Aug 2021 00:18:27 UTC |
	|         | node delete m03                                              |                                     |         |         |                               |                               |
	| -p      | multinode-20210813001157-676638                              | multinode-20210813001157-676638     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:18:28 UTC | Fri, 13 Aug 2021 00:19:09 UTC |
	|         | stop                                                         |                                     |         |         |                               |                               |
	| start   | -p                                                           | multinode-20210813001157-676638     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:19:09 UTC | Fri, 13 Aug 2021 00:20:18 UTC |
	|         | multinode-20210813001157-676638                              |                                     |         |         |                               |                               |
	|         | --wait=true -v=8                                             |                                     |         |         |                               |                               |
	|         | --alsologtostderr                                            |                                     |         |         |                               |                               |
	|         | --driver=docker                                              |                                     |         |         |                               |                               |
	|         | --container-runtime=crio                                     |                                     |         |         |                               |                               |
	| start   | -p                                                           | multinode-20210813001157-676638-m03 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:20:19 UTC | Fri, 13 Aug 2021 00:20:52 UTC |
	|         | multinode-20210813001157-676638-m03                          |                                     |         |         |                               |                               |
	|         | --driver=docker                                              |                                     |         |         |                               |                               |
	|         | --container-runtime=crio                                     |                                     |         |         |                               |                               |
	| delete  | -p                                                           | multinode-20210813001157-676638-m03 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:20:52 UTC | Fri, 13 Aug 2021 00:20:55 UTC |
	|         | multinode-20210813001157-676638-m03                          |                                     |         |         |                               |                               |
	| -p      | multinode-20210813001157-676638                              | multinode-20210813001157-676638     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:20:56 UTC | Fri, 13 Aug 2021 00:20:57 UTC |
	|         | logs -n 25                                                   |                                     |         |         |                               |                               |
	| delete  | -p                                                           | multinode-20210813001157-676638     | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:20:57 UTC | Fri, 13 Aug 2021 00:21:03 UTC |
	|         | multinode-20210813001157-676638                              |                                     |         |         |                               |                               |
	| start   | -p                                                           | test-preload-20210813002243-676638  | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:22:43 UTC | Fri, 13 Aug 2021 00:24:37 UTC |
	|         | test-preload-20210813002243-676638                           |                                     |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                              |                                     |         |         |                               |                               |
	|         | --wait=true --preload=false                                  |                                     |         |         |                               |                               |
	|         | --driver=docker                                              |                                     |         |         |                               |                               |
	|         | --container-runtime=crio                                     |                                     |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0                                 |                                     |         |         |                               |                               |
	| ssh     | -p                                                           | test-preload-20210813002243-676638  | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:24:37 UTC | Fri, 13 Aug 2021 00:24:38 UTC |
	|         | test-preload-20210813002243-676638                           |                                     |         |         |                               |                               |
	|         | -- sudo crictl pull busybox                                  |                                     |         |         |                               |                               |
	| start   | -p                                                           | test-preload-20210813002243-676638  | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:24:38 UTC | Fri, 13 Aug 2021 00:25:10 UTC |
	|         | test-preload-20210813002243-676638                           |                                     |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                              |                                     |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=docker                             |                                     |         |         |                               |                               |
	|         |  --container-runtime=crio                                    |                                     |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3                                 |                                     |         |         |                               |                               |
	| ssh     | -p                                                           | test-preload-20210813002243-676638  | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:25:10 UTC | Fri, 13 Aug 2021 00:25:10 UTC |
	|         | test-preload-20210813002243-676638                           |                                     |         |         |                               |                               |
	|         | -- sudo crictl image ls                                      |                                     |         |         |                               |                               |
	|---------|--------------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 00:24:38
	Running on machine: debian-jenkins-agent-12
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 00:24:38.964272  807704 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:24:38.964374  807704 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:24:38.964379  807704 out.go:311] Setting ErrFile to fd 2...
	I0813 00:24:38.964384  807704 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:24:38.964501  807704 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:24:38.964837  807704 out.go:305] Setting JSON to false
	I0813 00:24:39.005308  807704 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":14841,"bootTime":1628799438,"procs":224,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 00:24:39.005472  807704 start.go:121] virtualization: kvm guest
	I0813 00:24:39.008380  807704 out.go:177] * [test-preload-20210813002243-676638] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 00:24:39.010148  807704 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:24:39.008585  807704 notify.go:169] Checking for updates...
	I0813 00:24:39.011689  807704 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 00:24:39.013286  807704 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:24:39.014916  807704 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 00:24:39.017910  807704 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0813 00:24:39.017963  807704 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 00:24:39.071472  807704 docker.go:132] docker version: linux-19.03.15
	I0813 00:24:39.071620  807704 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 00:24:39.160231  807704 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:64 SystemTime:2021-08-13 00:24:39.109786514 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 00:24:39.160319  807704 docker.go:244] overlay module found
	I0813 00:24:39.162460  807704 out.go:177] * Using the docker driver based on existing profile
	I0813 00:24:39.162496  807704 start.go:278] selected driver: docker
	I0813 00:24:39.162503  807704 start.go:751] validating driver "docker" against &{Name:test-preload-20210813002243-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20210813002243-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:24:39.162671  807704 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 00:24:39.162772  807704 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 00:24:39.162805  807704 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 00:24:39.164410  807704 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 00:24:39.165392  807704 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 00:24:39.253864  807704 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:64 SystemTime:2021-08-13 00:24:39.204407614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0813 00:24:39.254050  807704 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 00:24:39.254132  807704 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 00:24:39.256415  807704 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 00:24:39.256566  807704 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 00:24:39.256604  807704 cni.go:93] Creating CNI manager for ""
	I0813 00:24:39.256613  807704 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 00:24:39.256623  807704 start_flags.go:277] config:
	{Name:test-preload-20210813002243-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.3 ClusterName:test-preload-20210813002243-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:24:39.258868  807704 out.go:177] * Starting control plane node test-preload-20210813002243-676638 in cluster test-preload-20210813002243-676638
	I0813 00:24:39.258926  807704 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 00:24:39.260561  807704 out.go:177] * Pulling base image ...
	I0813 00:24:39.260594  807704 preload.go:131] Checking if preload exists for k8s version v1.17.3 and runtime crio
	I0813 00:24:39.260683  807704 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	W0813 00:24:39.297877  807704 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.17.3-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0813 00:24:39.298132  807704 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813002243-676638/config.json ...
	I0813 00:24:39.298190  807704 cache.go:108] acquiring lock: {Name:mke4eebc5ad8ba944ce6e19ec1f570e4c6b965df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:24:39.298188  807704 cache.go:108] acquiring lock: {Name:mkebf5e5183b3fe1832480b10a0767c0216ef0fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:24:39.298190  807704 cache.go:108] acquiring lock: {Name:mk4a45d8efed11b108ca59365945e4b9ab923c03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:24:39.298342  807704 cache.go:108] acquiring lock: {Name:mk7e91618de7d10682336f6b0a703a9b3cc6d7c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:24:39.298367  807704 cache.go:108] acquiring lock: {Name:mke5c01a1cb1a73be4dfd2a0da96ad9988a0cf44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:24:39.298384  807704 cache.go:108] acquiring lock: {Name:mk083d4b02aa9d0f9be1d05f8e6c4194cbc5f200 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:24:39.298436  807704 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5 exists
	I0813 00:24:39.298454  807704 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
	I0813 00:24:39.298462  807704 cache.go:97] cache image "k8s.gcr.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5" took 280.243µs
	I0813 00:24:39.298483  807704 cache.go:81] save to tar file k8s.gcr.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5 succeeded
	I0813 00:24:39.298479  807704 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 161.133µs
	I0813 00:24:39.298487  807704 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0813 00:24:39.298500  807704 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0813 00:24:39.298408  807704 cache.go:108] acquiring lock: {Name:mkd01ce055fc376d1e00625138b7b37ece1a1361 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:24:39.298510  807704 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists
	I0813 00:24:39.298513  807704 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 337.863µs
	I0813 00:24:39.298524  807704 cache.go:97] cache image "k8s.gcr.io/pause:3.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/pause_3.1" took 346.575µs
	I0813 00:24:39.298530  807704 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0813 00:24:39.298535  807704 cache.go:81] save to tar file k8s.gcr.io/pause:3.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded
	I0813 00:24:39.298480  807704 cache.go:108] acquiring lock: {Name:mk68e9c9f14f7fc2f472892f42aea146dad2efc7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:24:39.298511  807704 cache.go:108] acquiring lock: {Name:mk06bf272dc62ce82f85725e8c0f3248e25ef66d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:24:39.298546  807704 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0813 00:24:39.298536  807704 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 00:24:39.298553  807704 cache.go:108] acquiring lock: {Name:mk5544900b2ada83cf50062296a864aae1257425 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:24:39.298536  807704 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.3
	I0813 00:24:39.298570  807704 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 243.626µs
	I0813 00:24:39.298587  807704 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 00:24:39.298561  807704 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 156.773µs
	I0813 00:24:39.298600  807704 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0813 00:24:39.298637  807704 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.3
	I0813 00:24:39.298671  807704 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.3
	I0813 00:24:39.298718  807704 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.17.3
	I0813 00:24:39.299578  807704 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.3: Error response from daemon: reference does not exist
	I0813 00:24:39.299607  807704 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.3: Error response from daemon: reference does not exist
	I0813 00:24:39.299581  807704 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.3: Error response from daemon: reference does not exist
	I0813 00:24:39.299735  807704 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.3: Error response from daemon: reference does not exist
	I0813 00:24:39.353654  807704 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0813 00:24:39.353696  807704 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0813 00:24:39.353720  807704 cache.go:205] Successfully downloaded all kic artifacts
	I0813 00:24:39.353766  807704 start.go:313] acquiring machines lock for test-preload-20210813002243-676638: {Name:mk9a9ad5e0a818ffb1e8e34b6113c89f53e10bd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:24:39.353874  807704 start.go:317] acquired machines lock for "test-preload-20210813002243-676638" in 86.414µs
	I0813 00:24:39.353902  807704 start.go:93] Skipping create...Using existing machine configuration
	I0813 00:24:39.353913  807704 fix.go:55] fixHost starting: 
	I0813 00:24:39.354170  807704 cli_runner.go:115] Run: docker container inspect test-preload-20210813002243-676638 --format={{.State.Status}}
	I0813 00:24:39.395637  807704 fix.go:108] recreateIfNeeded on test-preload-20210813002243-676638: state=Running err=<nil>
	W0813 00:24:39.395667  807704 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 00:24:39.398284  807704 out.go:177] * Updating the running docker "test-preload-20210813002243-676638" container ...
	I0813 00:24:39.398330  807704 machine.go:88] provisioning docker machine ...
	I0813 00:24:39.398357  807704 ubuntu.go:169] provisioning hostname "test-preload-20210813002243-676638"
	I0813 00:24:39.398429  807704 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210813002243-676638
	I0813 00:24:39.440888  807704 main.go:130] libmachine: Using SSH client type: native
	I0813 00:24:39.441134  807704 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33343 <nil> <nil>}
	I0813 00:24:39.441152  807704 main.go:130] libmachine: About to run SSH command:
	sudo hostname test-preload-20210813002243-676638 && echo "test-preload-20210813002243-676638" | sudo tee /etc/hostname
	I0813 00:24:39.567068  807704 main.go:130] libmachine: SSH cmd err, output: <nil>: test-preload-20210813002243-676638
	
	I0813 00:24:39.567175  807704 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210813002243-676638
	I0813 00:24:39.610656  807704 main.go:130] libmachine: Using SSH client type: native
	I0813 00:24:39.611065  807704 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33343 <nil> <nil>}
	I0813 00:24:39.611099  807704 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-20210813002243-676638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-20210813002243-676638/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-20210813002243-676638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 00:24:39.726104  807704 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 00:24:39.726136  807704 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0813 00:24:39.726180  807704 ubuntu.go:177] setting up certificates
	I0813 00:24:39.726193  807704 provision.go:83] configureAuth start
	I0813 00:24:39.726258  807704 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20210813002243-676638
	I0813 00:24:39.771349  807704 provision.go:137] copyHostCerts
	I0813 00:24:39.771430  807704 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem, removing ...
	I0813 00:24:39.771442  807704 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 00:24:39.771503  807704 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1082 bytes)
	I0813 00:24:39.771616  807704 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem, removing ...
	I0813 00:24:39.771632  807704 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 00:24:39.771655  807704 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0813 00:24:39.771734  807704 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem, removing ...
	I0813 00:24:39.771742  807704 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 00:24:39.771761  807704 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1675 bytes)
	I0813 00:24:39.771815  807704 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.test-preload-20210813002243-676638 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-20210813002243-676638]
	I0813 00:24:39.924694  807704 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3
	I0813 00:24:39.925439  807704 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3
	I0813 00:24:39.927858  807704 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3
	I0813 00:24:39.929328  807704 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3
	I0813 00:24:40.048590  807704 provision.go:171] copyRemoteCerts
	I0813 00:24:40.048664  807704 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 00:24:40.048706  807704 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210813002243-676638
	I0813 00:24:40.099454  807704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33343 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/test-preload-20210813002243-676638/id_rsa Username:docker}
	I0813 00:24:40.187570  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 00:24:40.207382  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0813 00:24:40.228278  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 00:24:40.248868  807704 provision.go:86] duration metric: configureAuth took 522.65401ms
	I0813 00:24:40.248898  807704 ubuntu.go:193] setting minikube options for container-runtime
	I0813 00:24:40.249298  807704 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210813002243-676638
	I0813 00:24:40.296446  807704 main.go:130] libmachine: Using SSH client type: native
	I0813 00:24:40.296657  807704 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33343 <nil> <nil>}
	I0813 00:24:40.296691  807704 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 00:24:40.746163  807704 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 exists
	I0813 00:24:40.746252  807704 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3" took 1.447772643s
	I0813 00:24:40.746278  807704 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 succeeded
	I0813 00:24:40.885921  807704 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 exists
	I0813 00:24:40.885982  807704 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3" took 1.587470375s
	I0813 00:24:40.886002  807704 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 succeeded
	I0813 00:24:40.895241  807704 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 exists
	I0813 00:24:40.895289  807704 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3" took 1.596925923s
	I0813 00:24:40.895303  807704 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 succeeded
	I0813 00:24:41.011122  807704 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 00:24:41.011159  807704 machine.go:91] provisioned docker machine in 1.612819896s
	I0813 00:24:41.011176  807704 start.go:267] post-start starting for "test-preload-20210813002243-676638" (driver="docker")
	I0813 00:24:41.011183  807704 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 00:24:41.011242  807704 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 00:24:41.011279  807704 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210813002243-676638
	I0813 00:24:41.055034  807704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33343 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/test-preload-20210813002243-676638/id_rsa Username:docker}
	I0813 00:24:41.092596  807704 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 exists
	I0813 00:24:41.092654  807704 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.17.3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3" took 1.794102409s
	I0813 00:24:41.092669  807704 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.17.3 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 succeeded
	I0813 00:24:41.092688  807704 cache.go:88] Successfully saved all images to host disk.
	I0813 00:24:41.142419  807704 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 00:24:41.145832  807704 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 00:24:41.145856  807704 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 00:24:41.145865  807704 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 00:24:41.145871  807704 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 00:24:41.145881  807704 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0813 00:24:41.145929  807704 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0813 00:24:41.146018  807704 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem -> 6766382.pem in /etc/ssl/certs
	I0813 00:24:41.146114  807704 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 00:24:41.153456  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem --> /etc/ssl/certs/6766382.pem (1708 bytes)
	I0813 00:24:41.171256  807704 start.go:270] post-start completed in 160.062575ms
	I0813 00:24:41.171323  807704 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 00:24:41.171360  807704 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210813002243-676638
	I0813 00:24:41.215952  807704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33343 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/test-preload-20210813002243-676638/id_rsa Username:docker}
	I0813 00:24:41.298738  807704 fix.go:57] fixHost completed within 1.944815762s
	I0813 00:24:41.298768  807704 start.go:80] releasing machines lock for "test-preload-20210813002243-676638", held for 1.944881203s
	I0813 00:24:41.298852  807704 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20210813002243-676638
	I0813 00:24:41.341163  807704 ssh_runner.go:149] Run: systemctl --version
	I0813 00:24:41.341175  807704 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 00:24:41.341256  807704 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210813002243-676638
	I0813 00:24:41.341341  807704 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210813002243-676638
	I0813 00:24:41.384201  807704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33343 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/test-preload-20210813002243-676638/id_rsa Username:docker}
	I0813 00:24:41.384303  807704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33343 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/test-preload-20210813002243-676638/id_rsa Username:docker}
	I0813 00:24:41.503448  807704 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 00:24:41.514823  807704 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 00:24:41.525570  807704 docker.go:153] disabling docker service ...
	I0813 00:24:41.525625  807704 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 00:24:41.535725  807704 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 00:24:41.545453  807704 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 00:24:41.668409  807704 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 00:24:41.784137  807704 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 00:24:41.793868  807704 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 00:24:41.807093  807704 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.1"|' -i /etc/crio/crio.conf"
	I0813 00:24:41.815669  807704 crio.go:66] Updating CRIO to use the custom CNI network "kindnet"
	I0813 00:24:41.815705  807704 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
	I0813 00:24:41.825274  807704 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 00:24:41.832168  807704 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 00:24:41.832220  807704 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 00:24:41.839748  807704 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 00:24:41.846382  807704 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 00:24:41.957562  807704 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 00:24:41.967261  807704 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 00:24:41.967335  807704 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:24:41.970705  807704 start.go:417] Will wait 60s for crictl version
	I0813 00:24:41.970760  807704 ssh_runner.go:149] Run: sudo crictl version
	I0813 00:24:41.998932  807704 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 00:24:41.999024  807704 ssh_runner.go:149] Run: crio --version
	I0813 00:24:42.063421  807704 ssh_runner.go:149] Run: crio --version
	I0813 00:24:42.131498  807704 out.go:177] * Preparing Kubernetes v1.17.3 on CRI-O 1.20.3 ...
	I0813 00:24:42.131589  807704 cli_runner.go:115] Run: docker network inspect test-preload-20210813002243-676638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 00:24:42.171126  807704 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0813 00:24:42.174813  807704 preload.go:131] Checking if preload exists for k8s version v1.17.3 and runtime crio
	I0813 00:24:42.174854  807704 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 00:24:42.206751  807704 crio.go:420] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.17.3". assuming images are not preloaded.
	I0813 00:24:42.206778  807704 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.17.3 k8s.gcr.io/kube-controller-manager:v1.17.3 k8s.gcr.io/kube-scheduler:v1.17.3 k8s.gcr.io/kube-proxy:v1.17.3 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0813 00:24:42.206851  807704 image.go:133] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 00:24:42.206863  807704 image.go:133] retrieving image: k8s.gcr.io/pause:3.1
	I0813 00:24:42.206887  807704 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.3
	I0813 00:24:42.206908  807704 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.3
	I0813 00:24:42.206943  807704 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.17.3
	I0813 00:24:42.206964  807704 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.3
	I0813 00:24:42.206979  807704 image.go:133] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 00:24:42.206991  807704 image.go:133] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0813 00:24:42.207087  807704 image.go:133] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0813 00:24:42.207101  807704 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0813 00:24:42.208879  807704 image.go:175] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.3: Error response from daemon: reference does not exist
	I0813 00:24:42.208905  807704 image.go:175] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.3: Error response from daemon: reference does not exist
	I0813 00:24:42.208931  807704 image.go:175] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.3: Error response from daemon: reference does not exist
	I0813 00:24:42.209471  807704 image.go:175] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.3: Error response from daemon: reference does not exist
	I0813 00:24:42.216943  807704 image.go:171] found k8s.gcr.io/pause:3.1 locally: &{Image:0xc000a5c220}
	I0813 00:24:42.217040  807704 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/pause:3.1
	I0813 00:24:42.581246  807704 image.go:171] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{Image:0xc000f78080}
	I0813 00:24:42.581374  807704 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 00:24:42.712276  807704 image.go:171] found index.docker.io/kubernetesui/metrics-scraper:v1.0.4 locally: &{Image:0xc000200300}
	I0813 00:24:42.712386  807704 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0813 00:24:42.733312  807704 image.go:171] found k8s.gcr.io/coredns:1.6.5 locally: &{Image:0xc0002007a0}
	I0813 00:24:42.733409  807704 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.5
	I0813 00:24:42.932611  807704 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.17.3
	I0813 00:24:42.933193  807704 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.17.3
	I0813 00:24:42.936978  807704 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.17.3
	I0813 00:24:42.962038  807704 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.17.3
	I0813 00:24:43.095852  807704 cache_images.go:106] "k8s.gcr.io/kube-controller-manager:v1.17.3" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.17.3" does not exist at hash "b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302" in container runtime
	I0813 00:24:43.095917  807704 cache_images.go:106] "k8s.gcr.io/kube-proxy:v1.17.3" needs transfer: "k8s.gcr.io/kube-proxy:v1.17.3" does not exist at hash "ae853e93800dc2572aeb425e5765cf9b25212bfc43695299e61dece06cffa4a1" in container runtime
	I0813 00:24:43.095960  807704 cri.go:205] Removing image: k8s.gcr.io/kube-proxy:v1.17.3
	I0813 00:24:43.095984  807704 cri.go:205] Removing image: k8s.gcr.io/kube-controller-manager:v1.17.3
	I0813 00:24:43.096014  807704 ssh_runner.go:149] Run: which crictl
	I0813 00:24:43.096041  807704 ssh_runner.go:149] Run: which crictl
	I0813 00:24:43.096052  807704 cache_images.go:106] "k8s.gcr.io/kube-scheduler:v1.17.3" needs transfer: "k8s.gcr.io/kube-scheduler:v1.17.3" does not exist at hash "d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad" in container runtime
	I0813 00:24:43.096072  807704 cri.go:205] Removing image: k8s.gcr.io/kube-scheduler:v1.17.3
	I0813 00:24:43.096111  807704 ssh_runner.go:149] Run: which crictl
	I0813 00:24:43.096117  807704 cache_images.go:106] "k8s.gcr.io/kube-apiserver:v1.17.3" needs transfer: "k8s.gcr.io/kube-apiserver:v1.17.3" does not exist at hash "90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b" in container runtime
	I0813 00:24:43.096156  807704 cri.go:205] Removing image: k8s.gcr.io/kube-apiserver:v1.17.3
	I0813 00:24:43.096215  807704 ssh_runner.go:149] Run: which crictl
	I0813 00:24:43.099835  807704 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.17.3
	I0813 00:24:43.102666  807704 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.17.3
	I0813 00:24:43.102796  807704 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.17.3
	I0813 00:24:43.102849  807704 ssh_runner.go:149] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.17.3
	I0813 00:24:43.133067  807704 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3
	I0813 00:24:43.133170  807704 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.3
	I0813 00:24:43.141445  807704 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3
	I0813 00:24:43.141532  807704 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3
	I0813 00:24:43.141563  807704 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.3
	I0813 00:24:43.141617  807704 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.3
	I0813 00:24:43.146469  807704 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-scheduler_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.17.3': No such file or directory
	I0813 00:24:43.146483  807704 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3
	I0813 00:24:43.146509  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 --> /var/lib/minikube/images/kube-scheduler_v1.17.3 (33822208 bytes)
	I0813 00:24:43.146559  807704 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.3
	I0813 00:24:43.192163  807704 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-controller-manager_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.17.3': No such file or directory
	I0813 00:24:43.192205  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 --> /var/lib/minikube/images/kube-controller-manager_v1.17.3 (48810496 bytes)
	I0813 00:24:43.192218  807704 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-apiserver_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.17.3': No such file or directory
	I0813 00:24:43.192232  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 --> /var/lib/minikube/images/kube-apiserver_v1.17.3 (50635776 bytes)
	I0813 00:24:43.192176  807704 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-proxy_v1.17.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.17.3': No such file or directory
	I0813 00:24:43.192269  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 --> /var/lib/minikube/images/kube-proxy_v1.17.3 (48706048 bytes)
	I0813 00:24:43.471550  807704 crio.go:191] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.3
	I0813 00:24:43.471633  807704 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.17.3
	I0813 00:24:44.942882  807704 image.go:171] found index.docker.io/kubernetesui/dashboard:v2.1.0 locally: &{Image:0xc000f78140}
	I0813 00:24:44.943000  807704 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.1.0
	I0813 00:24:45.371524  807704 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.17.3: (1.899843857s)
	I0813 00:24:45.371560  807704 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 from cache
	I0813 00:24:45.371588  807704 crio.go:191] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.3
	I0813 00:24:45.371644  807704 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.17.3
	I0813 00:24:45.669277  807704 image.go:171] found k8s.gcr.io/etcd:3.4.3-0 locally: &{Image:0xc000f780a0}
	I0813 00:24:45.669389  807704 ssh_runner.go:149] Run: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0813 00:24:48.429504  807704 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.17.3: (3.057830027s)
	I0813 00:24:48.429542  807704 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 from cache
	I0813 00:24:48.429573  807704 crio.go:191] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.3
	I0813 00:24:48.429582  807704 ssh_runner.go:189] Completed: sudo podman image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0: (2.760164256s)
	I0813 00:24:48.429631  807704 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.17.3
	I0813 00:24:51.681913  807704 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.17.3: (3.252243735s)
	I0813 00:24:51.681942  807704 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 from cache
	I0813 00:24:51.681967  807704 crio.go:191] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.3
	I0813 00:24:51.682016  807704 ssh_runner.go:149] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.17.3
	I0813 00:24:53.437099  807704 ssh_runner.go:189] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.17.3: (1.755053942s)
	I0813 00:24:53.437138  807704 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 from cache
	I0813 00:24:53.437164  807704 cache_images.go:113] Successfully loaded all cached images
	I0813 00:24:53.437170  807704 cache_images.go:82] LoadImages completed in 11.230379212s
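The block above repeats a fixed pattern for each cached image: `stat -c "%s %y"` probes the target path (exit status 1 produces the "existence check ... Process exited with status 1" lines), the tarball is transferred when absent, and `podman load -i` imports it. A minimal local sketch of that probe-then-copy step follows — throwaway temp directories stand in for the cache and `/var/lib/minikube/images`, and the `scp`/`podman load` steps are omitted since they need a remote host and a container runtime:

```shell
set -eu
cache_dir=$(mktemp -d)   # stands in for ~/.minikube/cache/images
dest_dir=$(mktemp -d)    # stands in for /var/lib/minikube/images
echo "image-tarball" > "$cache_dir/kube-scheduler_v1.17.3"

# Existence check: stat exits non-zero when the file is absent,
# which is exactly the status-1 result seen in the log above.
if ! stat -c "%s %y" "$dest_dir/kube-scheduler_v1.17.3" >/dev/null 2>&1; then
  # The real flow uses scp to the node; a plain cp illustrates the step.
  cp "$cache_dir/kube-scheduler_v1.17.3" "$dest_dir/"
fi
stat -c "%s %y" "$dest_dir/kube-scheduler_v1.17.3"
```

Running the probe again after the copy succeeds, which is why each image is transferred at most once per node.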
	I0813 00:24:53.437295  807704 ssh_runner.go:149] Run: crio config
	I0813 00:24:53.510134  807704 cni.go:93] Creating CNI manager for ""
	I0813 00:24:53.510159  807704 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 00:24:53.510172  807704 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 00:24:53.510186  807704 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.17.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-20210813002243-676638 NodeName:test-preload-20210813002243-676638 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 00:24:53.510356  807704 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "test-preload-20210813002243-676638"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.17.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 00:24:53.510456  807704 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.17.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=test-preload-20210813002243-676638 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.17.3 ClusterName:test-preload-20210813002243-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 00:24:53.510511  807704 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.17.3
	I0813 00:24:53.518491  807704 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.17.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.17.3': No such file or directory
	
	Initiating transfer...
	I0813 00:24:53.518567  807704 ssh_runner.go:149] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.3
	I0813 00:24:53.526679  807704 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/linux/v1.17.3/kubelet
	I0813 00:24:53.526680  807704 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/linux/v1.17.3/kubeadm
	I0813 00:24:53.526679  807704 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/linux/v1.17.3/kubectl
	I0813 00:24:53.946843  807704 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubectl
	I0813 00:24:53.951288  807704 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.3/kubectl': No such file or directory
	I0813 00:24:53.951334  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/linux/v1.17.3/kubectl --> /var/lib/minikube/binaries/v1.17.3/kubectl (43499520 bytes)
	I0813 00:24:53.976608  807704 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubeadm
	I0813 00:24:53.987933  807704 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.3/kubeadm': No such file or directory
	I0813 00:24:53.987977  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/linux/v1.17.3/kubeadm --> /var/lib/minikube/binaries/v1.17.3/kubeadm (39346176 bytes)
	I0813 00:24:54.316582  807704 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:24:54.327364  807704 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0813 00:24:54.342237  807704 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubelet
	I0813 00:24:54.345880  807704 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.17.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.3/kubelet': No such file or directory
	I0813 00:24:54.345925  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/linux/v1.17.3/kubelet --> /var/lib/minikube/binaries/v1.17.3/kubelet (111584792 bytes)
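The three `download.go` lines above use `?checksum=file:<url>.sha256` URLs, meaning each binary is fetched alongside a published SHA-256 digest and verified before it lands in the cache. A network-free sketch of that verification step, using a stand-in file in a temp directory rather than the real kubelet download:

```shell
set -eu
work=$(mktemp -d)
# Stand-in for the downloaded binary and its published .sha256 sidecar file.
printf 'fake-kubelet-binary' > "$work/kubelet"
sha256sum "$work/kubelet" | awk '{print $1}' > "$work/kubelet.sha256"

# Recompute the digest and compare against the sidecar, as a
# checksum=file:... URL instructs the downloader to do.
actual=$(sha256sum "$work/kubelet" | awk '{print $1}')
expected=$(cat "$work/kubelet.sha256")
[ "$actual" = "$expected" ] && echo "checksum OK"
```

A mismatch would make the comparison fail and (under `set -e`) abort, which is the behavior you want when a download is corrupted or tampered with.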
	I0813 00:24:54.548617  807704 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 00:24:54.555570  807704 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (565 bytes)
	I0813 00:24:54.568048  807704 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 00:24:54.580724  807704 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0813 00:24:54.594092  807704 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0813 00:24:54.597677  807704 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813002243-676638 for IP: 192.168.49.2
	I0813 00:24:54.597735  807704 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0813 00:24:54.597749  807704 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0813 00:24:54.597804  807704 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813002243-676638/client.key
	I0813 00:24:54.597825  807704 certs.go:290] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813002243-676638/apiserver.key.dd3b5fb2
	I0813 00:24:54.597848  807704 certs.go:290] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813002243-676638/proxy-client.key
	I0813 00:24:54.597946  807704 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/676638.pem (1338 bytes)
	W0813 00:24:54.598005  807704 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/676638_empty.pem, impossibly tiny 0 bytes
	I0813 00:24:54.598018  807704 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 00:24:54.598054  807704 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1082 bytes)
	I0813 00:24:54.598089  807704 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0813 00:24:54.598117  807704 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1675 bytes)
	I0813 00:24:54.598211  807704 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem (1708 bytes)
	I0813 00:24:54.599339  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813002243-676638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 00:24:54.616969  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813002243-676638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 00:24:54.634047  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813002243-676638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 00:24:54.652145  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813002243-676638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 00:24:54.670277  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 00:24:54.689501  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 00:24:54.707975  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 00:24:54.726155  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 00:24:54.743992  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/676638.pem --> /usr/share/ca-certificates/676638.pem (1338 bytes)
	I0813 00:24:54.761690  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem --> /usr/share/ca-certificates/6766382.pem (1708 bytes)
	I0813 00:24:54.781449  807704 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 00:24:54.799379  807704 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 00:24:54.812202  807704 ssh_runner.go:149] Run: openssl version
	I0813 00:24:54.817346  807704 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/676638.pem && ln -fs /usr/share/ca-certificates/676638.pem /etc/ssl/certs/676638.pem"
	I0813 00:24:54.825549  807704 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/676638.pem
	I0813 00:24:54.829055  807704 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 13 00:05 /usr/share/ca-certificates/676638.pem
	I0813 00:24:54.829109  807704 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/676638.pem
	I0813 00:24:54.834597  807704 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/676638.pem /etc/ssl/certs/51391683.0"
	I0813 00:24:54.841911  807704 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6766382.pem && ln -fs /usr/share/ca-certificates/6766382.pem /etc/ssl/certs/6766382.pem"
	I0813 00:24:54.849631  807704 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6766382.pem
	I0813 00:24:54.852775  807704 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 13 00:05 /usr/share/ca-certificates/6766382.pem
	I0813 00:24:54.852830  807704 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6766382.pem
	I0813 00:24:54.857894  807704 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6766382.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 00:24:54.864830  807704 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 00:24:54.872087  807704 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:24:54.875097  807704 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 12 23:55 /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:24:54.875156  807704 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:24:54.880089  807704 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
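The `ln -fs ... /etc/ssl/certs/51391683.0` style commands above exist because OpenSSL locates trusted CAs by subject-hash filenames of the form `<8-hex-digit-hash>.0`; each `openssl x509 -hash -noout` run in the log computes that hash for one certificate. A self-contained sketch of the same step, using a throwaway self-signed certificate in a temp directory (names are illustrative, not minikube's code):

```shell
set -eu
certs=$(mktemp -d)   # stands in for /etc/ssl/certs
# Generate a throwaway self-signed CA certificate to hash.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=minikubeCA-example" \
  -keyout "$certs/ca.key" -out "$certs/minikubeCA.pem" 2>/dev/null

# Subject hash (8 lowercase hex digits), then the <hash>.0 symlink
# that makes OpenSSL's CA lookup find the certificate.
hash=$(openssl x509 -hash -noout -in "$certs/minikubeCA.pem")
ln -fs "$certs/minikubeCA.pem" "$certs/$hash.0"
ls -l "$certs/$hash.0"
```

The `test -L ... || ln -fs ...` guard seen in the log just makes the link creation idempotent across repeated provisioning runs.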
	I0813 00:24:54.886886  807704 kubeadm.go:390] StartCluster: {Name:test-preload-20210813002243-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.3 ClusterName:test-preload-20210813002243-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.17.3 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:24:54.887019  807704 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 00:24:54.887060  807704 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 00:24:54.912309  807704 cri.go:76] found id: "1b3b5683f4a9ff053524367e87259b91d3dc153e616745696f2011092fb1d1df"
	I0813 00:24:54.912336  807704 cri.go:76] found id: "2e4137d0d5a8e2c1060d443f99b58a2aceff313d1cd64d742636611ae3b4e1a2"
	I0813 00:24:54.912341  807704 cri.go:76] found id: "ced11d540ac880d0f874487cbfd54fa7498fcc42b4cc0fb1c1e8e9c2bf31abea"
	I0813 00:24:54.912348  807704 cri.go:76] found id: "e62240b4305b268ea5c6bf1f0fc86a635a19354c96d3cf79311eca79dfda55ca"
	I0813 00:24:54.912353  807704 cri.go:76] found id: "099efe330404d5566ed4ca3f92b52774a0906317bf729a003a34c8636522b6fc"
	I0813 00:24:54.912359  807704 cri.go:76] found id: "93238e6cef49886ac2a89e6d443b6d4f401750c66ca00e40674719670485ae18"
	I0813 00:24:54.912365  807704 cri.go:76] found id: "a6144a75d08798b823d12d57d635a39524409da42b1abc5e902f8bae12d391e9"
	I0813 00:24:54.912371  807704 cri.go:76] found id: "2a5a4a41062eaa52c37a212a8b205aef44a432dae42172939cdfe77658bf13de"
	I0813 00:24:54.912377  807704 cri.go:76] found id: ""
	I0813 00:24:54.912420  807704 ssh_runner.go:149] Run: sudo runc list -f json
	I0813 00:24:54.953502  807704 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"06b01061b5afd50977f0f1cf3a696ed318947882c403d0ac8149be21d1b125fc","pid":2655,"status":"running","bundle":"/run/containers/storage/overlay-containers/06b01061b5afd50977f0f1cf3a696ed318947882c403d0ac8149be21d1b125fc/userdata","rootfs":"/var/lib/containers/storage/overlay/d0c1e970fe35650a471390ac528247384fcc35348f06cbf23183172d668cdf59/merged","created":"2021-08-13T00:23:43.541531378Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"bb577061a17ad23cfbbf52e9419bf32a\",\"kubernetes.io/config.seen\":\"2021-08-13T00:23:42.351518107Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"06b01061b5afd50977f0f1cf3a696ed318947882c403d0ac8149be21d1b125fc","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-test-preload-202108130022
43-676638_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T00:23:43.455288782Z","io.kubernetes.cri-o.HostName":"test-preload-20210813002243-676638","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/06b01061b5afd50977f0f1cf3a696ed318947882c403d0ac8149be21d1b125fc/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-scheduler-test-preload-20210813002243-676638","io.kubernetes.cri-o.Labels":"{\"component\":\"kube-scheduler\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"bb577061a17ad23cfbbf52e9419bf32a\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-test-preload-20210813002243-676638\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-test-preload-20210813002243-676638_bb577061a17ad23cfb
bf52e9419bf32a/06b01061b5afd50977f0f1cf3a696ed318947882c403d0ac8149be21d1b125fc.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-test-preload-20210813002243-676638\",\"uid\":\"bb577061a17ad23cfbbf52e9419bf32a\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d0c1e970fe35650a471390ac528247384fcc35348f06cbf23183172d668cdf59/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-test-preload-20210813002243-676638_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/06b01061b5afd50977f0f1cf3a696ed318947882c403d0ac8149be21d1b125fc/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"06b01061b5afd50977f0f1cf3a696ed318947882c403d0ac8149be21
d1b125fc","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/06b01061b5afd50977f0f1cf3a696ed318947882c403d0ac8149be21d1b125fc/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-test-preload-20210813002243-676638","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.hash":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.seen":"2021-08-13T00:23:42.351518107Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"099efe330404d5566ed4ca3f92b52774a0906317bf729a003a34c8636522b6fc","pid":2810,"status":"running","bundle":"/run/containers/storage/overlay-containers/099efe330404d5566ed4ca3f92b52774a0906317bf729a003a34c8636522b6fc/userdata","rootfs":"/var/lib/containers/storage/overlay/a8ec9b0a2de6f19308b595286abcdc4f81d2013a21b682dba747fe128964b3f7/merged","cr
eated":"2021-08-13T00:23:43.945584705Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ffc41559","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ffc41559\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"099efe330404d5566ed4ca3f92b52774a0906317bf729a003a34c8636522b6fc","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T00:23:43.72975826Z","io.kubernetes.cri-o.Image":"0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube
-apiserver:v1.17.0","io.kubernetes.cri-o.ImageRef":"0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-test-preload-20210813002243-676638\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"f8c1872d6958c845ffffb18f158fd9df\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-test-preload-20210813002243-676638_f8c1872d6958c845ffffb18f158fd9df/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a8ec9b0a2de6f19308b595286abcdc4f81d2013a21b682dba747fe128964b3f7/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-test-preload-20210813002243-676638_kube-system_f8c1872d6958c845ffffb18f158fd9df_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c0ed23ec9f5bde6d89a59994134c2e4c1645373
9fdae950be06d2e67faf1d7a3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c0ed23ec9f5bde6d89a59994134c2e4c16453739fdae950be06d2e67faf1d7a3","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-test-preload-20210813002243-676638_kube-system_f8c1872d6958c845ffffb18f158fd9df_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/f8c1872d6958c845ffffb18f158fd9df/containers/kube-apiserver/905d5df8\",\"readonly\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/f8c1872d6958c845ffffb18f158fd9df/etc-hosts\",\"readonly\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/et
c/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-apiserver-test-preload-20210813002243-676638","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"f8c1872d6958c845ffffb18f158fd9df","kubernetes.io/config.hash":"f8c1872d6958c845ffffb18f158fd9df","kubernetes.io/config.seen":"2021-08-13T00:23:42.351511589Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1234ecb4bec235cdb94a257d1edbbf18f99f7bb9fc87660bb73ffbf23c3d4c53","pid":2663,"status":"running","bundle":"/run/containers/storage/overlay-containers/1234ecb4bec235cdb9
4a257d1edbbf18f99f7bb9fc87660bb73ffbf23c3d4c53/userdata","rootfs":"/var/lib/containers/storage/overlay/9faa129516f8b57d84ef6fe856f4c4b5541f40be142ea5fd60fe073dde68955d/merged","created":"2021-08-13T00:23:43.589811761Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T00:23:42.351516549Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"01e1f4e495c3311ccc20368c1e385f74\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"1234ecb4bec235cdb94a257d1edbbf18f99f7bb9fc87660bb73ffbf23c3d4c53","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-test-preload-20210813002243-676638_kube-system_01e1f4e495c3311ccc20368c1e385f74_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T00:23:43.457291029Z","io.kubernetes.cri-o.HostName":"test-preload-20210813002243-676638","io.k
ubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/1234ecb4bec235cdb94a257d1edbbf18f99f7bb9fc87660bb73ffbf23c3d4c53/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-controller-manager-test-preload-20210813002243-676638","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-test-preload-20210813002243-676638\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"01e1f4e495c3311ccc20368c1e385f74\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-test-preload-20210813002243-676638_01e1f4e495c3311ccc20368c1e385f74/1234ecb4bec235cdb94a257d1edbbf18f99f7bb9fc87660bb73ffbf23c3d4c53.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-test-preload-20210813002243-676638\",\"uid\":\"01e1
f4e495c3311ccc20368c1e385f74\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9faa129516f8b57d84ef6fe856f4c4b5541f40be142ea5fd60fe073dde68955d/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-test-preload-20210813002243-676638_kube-system_01e1f4e495c3311ccc20368c1e385f74_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/1234ecb4bec235cdb94a257d1edbbf18f99f7bb9fc87660bb73ffbf23c3d4c53/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"1234ecb4bec235cdb94a257d1edbbf18f99f7bb9fc87660bb73ffbf23c3d4c53","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/1234ecb4bec235cdb94a257d1edbbf18f99f7bb9fc87660bb73ffbf23c
3d4c53/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-test-preload-20210813002243-676638","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"01e1f4e495c3311ccc20368c1e385f74","kubernetes.io/config.hash":"01e1f4e495c3311ccc20368c1e385f74","kubernetes.io/config.seen":"2021-08-13T00:23:42.351516549Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1b3b5683f4a9ff053524367e87259b91d3dc153e616745696f2011092fb1d1df","pid":4343,"status":"running","bundle":"/run/containers/storage/overlay-containers/1b3b5683f4a9ff053524367e87259b91d3dc153e616745696f2011092fb1d1df/userdata","rootfs":"/var/lib/containers/storage/overlay/fe5bdcb25d452c87183efb98af4812b65473af339b1aa491cd2379a283d14d40/merged","created":"2021-08-13T00:24:29.133508578Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"64169340","io.kubernetes.container.name":"coredns","io.kube
rnetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"64169340\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.c
ri-o.ContainerID":"1b3b5683f4a9ff053524367e87259b91d3dc153e616745696f2011092fb1d1df","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T00:24:28.996050867Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns:1.6.5","io.kubernetes.cri-o.ImageRef":"70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-6955765f44-fvjzm\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"40a844df-90e2-4539-a9c0-ff1b20374ebf\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-6955765f44-fvjzm_40a844df-90e2-4539-a9c0-ff1b20374ebf/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/fe5bdcb25d452c87183efb98af4
812b65473af339b1aa491cd2379a283d14d40/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-6955765f44-fvjzm_kube-system_40a844df-90e2-4539-a9c0-ff1b20374ebf_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/8f07e8a035ebd8bfd9803628db78d55607395460d6fd8a5fe3ce077c985d60c7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8f07e8a035ebd8bfd9803628db78d55607395460d6fd8a5fe3ce077c985d60c7","io.kubernetes.cri-o.SandboxName":"k8s_coredns-6955765f44-fvjzm_kube-system_40a844df-90e2-4539-a9c0-ff1b20374ebf_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/40a844df-90e2-4539-a9c0-ff1b20374ebf/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/40a844df-90e2-4539-a9c0-ff1b20374ebf/etc-hos
ts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/40a844df-90e2-4539-a9c0-ff1b20374ebf/containers/coredns/1f593a55\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/40a844df-90e2-4539-a9c0-ff1b20374ebf/volumes/kubernetes.io~secret/coredns-token-4sxrr\",\"readonly\":true}]","io.kubernetes.pod.name":"coredns-6955765f44-fvjzm","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"40a844df-90e2-4539-a9c0-ff1b20374ebf","kubernetes.io/config.seen":"2021-08-13T00:24:06.706851963Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2a5a4a41062eaa52c37a212a8b205aef44a432dae42172939cdfe77658bf13de","pid":2780,"status":"running","bundle":"/run/containers/storage/overlay-contai
ners/2a5a4a41062eaa52c37a212a8b205aef44a432dae42172939cdfe77658bf13de/userdata","rootfs":"/var/lib/containers/storage/overlay/aa17ad885be3ca6ef48113b1211f1fb9d786ca4fd2a374185c0309cf414a6a00/merged","created":"2021-08-13T00:23:43.901428666Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"99930feb","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"99930feb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2a5a4a41062eaa52c37a212a8b205aef44a432dae42172939cdfe77658bf13de","io.kubernetes.cri-o.ContainerType":"container","io
.kubernetes.cri-o.Created":"2021-08-13T00:23:43.637903339Z","io.kubernetes.cri-o.Image":"78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.17.0","io.kubernetes.cri-o.ImageRef":"78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-test-preload-20210813002243-676638\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"bb577061a17ad23cfbbf52e9419bf32a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-test-preload-20210813002243-676638_bb577061a17ad23cfbbf52e9419bf32a/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/aa17ad885be3ca6ef48113b1211f1fb9d786ca4fd2a374185c0309cf414a6a00/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-schedu
ler-test-preload-20210813002243-676638_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/06b01061b5afd50977f0f1cf3a696ed318947882c403d0ac8149be21d1b125fc/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"06b01061b5afd50977f0f1cf3a696ed318947882c403d0ac8149be21d1b125fc","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-test-preload-20210813002243-676638_kube-system_bb577061a17ad23cfbbf52e9419bf32a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/bb577061a17ad23cfbbf52e9419bf32a/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/bb577061a17ad23cfbbf52e9419bf32a/containers/kube-scheduler/9eb76d53\",\"readonly\":false},{\"container_path\":\"/etc/kubernetes/sc
heduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-scheduler-test-preload-20210813002243-676638","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.hash":"bb577061a17ad23cfbbf52e9419bf32a","kubernetes.io/config.seen":"2021-08-13T00:23:42.351518107Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2e4137d0d5a8e2c1060d443f99b58a2aceff313d1cd64d742636611ae3b4e1a2","pid":4041,"status":"running","bundle":"/run/containers/storage/overlay-containers/2e4137d0d5a8e2c1060d443f99b58a2aceff313d1cd64d742636611ae3b4e1a2/userdata","rootfs":"/var/lib/containers/storage/overlay/f6234695a00c76aa5da5d5ee3e5679da22b13ddb4dca8d9b7a56915e75d60895/merged","created":"2021-08-13T00:24:12.75348296Z","
annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d80cf235","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d80cf235\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2e4137d0d5a8e2c1060d443f99b58a2aceff313d1cd64d742636611ae3b4e1a2","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T00:24:12.628623571Z","io.kubernetes.cri-o.Image":"docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindne
td:v20210326-1e038dc5","io.kubernetes.cri-o.ImageRef":"6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-xzvv2\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d33fcc36-e797-4acb-862f-81982ea3bffa\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-xzvv2_d33fcc36-e797-4acb-862f-81982ea3bffa/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f6234695a00c76aa5da5d5ee3e5679da22b13ddb4dca8d9b7a56915e75d60895/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-xzvv2_kube-system_d33fcc36-e797-4acb-862f-81982ea3bffa_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3d524154e8fb38f6e5ace19e1b0630efc7171651d8a152ff6108790016eee219/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3d524154e8fb38f6e5ace19e
1b0630efc7171651d8a152ff6108790016eee219","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-xzvv2_kube-system_d33fcc36-e797-4acb-862f-81982ea3bffa_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d33fcc36-e797-4acb-862f-81982ea3bffa/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d33fcc36-e797-4acb-862f-81982ea3bffa/containers/kindnet-cni/0ff0e365\",\"readonly\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kub
elet/pods/d33fcc36-e797-4acb-862f-81982ea3bffa/volumes/kubernetes.io~secret/kindnet-token-bfkct\",\"readonly\":true}]","io.kubernetes.pod.name":"kindnet-xzvv2","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d33fcc36-e797-4acb-862f-81982ea3bffa","kubernetes.io/config.seen":"2021-08-13T00:24:05.481571207Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3d524154e8fb38f6e5ace19e1b0630efc7171651d8a152ff6108790016eee219","pid":3601,"status":"running","bundle":"/run/containers/storage/overlay-containers/3d524154e8fb38f6e5ace19e1b0630efc7171651d8a152ff6108790016eee219/userdata","rootfs":"/var/lib/containers/storage/overlay/118a5ad58aad4a344f5a8b9bdb7e29fcccb3fe788356db63ef2c34997942bc3b/merged","created":"2021-08-13T00:24:05.91767184Z","annotations":{"app":"kindnet","controller-revision-hash":"5998
5d8787","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2021-08-13T00:24:05.481571207Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"3d524154e8fb38f6e5ace19e1b0630efc7171651d8a152ff6108790016eee219","io.kubernetes.cri-o.ContainerName":"k8s_POD_kindnet-xzvv2_kube-system_d33fcc36-e797-4acb-862f-81982ea3bffa_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T00:24:05.807854467Z","io.kubernetes.cri-o.HostName":"test-preload-20210813002243-676638","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/3d524154e8fb38f6e5ace19e1b0630efc7171651d8a152ff6108790016eee219/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kindnet-xzvv2","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system
\",\"io.kubernetes.pod.name\":\"kindnet-xzvv2\",\"pod-template-generation\":\"1\",\"k8s-app\":\"kindnet\",\"app\":\"kindnet\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"node\",\"io.kubernetes.pod.uid\":\"d33fcc36-e797-4acb-862f-81982ea3bffa\",\"controller-revision-hash\":\"59985d8787\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-xzvv2_d33fcc36-e797-4acb-862f-81982ea3bffa/3d524154e8fb38f6e5ace19e1b0630efc7171651d8a152ff6108790016eee219.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-xzvv2\",\"uid\":\"d33fcc36-e797-4acb-862f-81982ea3bffa\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/118a5ad58aad4a344f5a8b9bdb7e29fcccb3fe788356db63ef2c34997942bc3b/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-xzvv2_kube-system_d33fcc36-e797-4acb-862f-81982ea3bffa_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernete
s.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3d524154e8fb38f6e5ace19e1b0630efc7171651d8a152ff6108790016eee219/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"3d524154e8fb38f6e5ace19e1b0630efc7171651d8a152ff6108790016eee219","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/3d524154e8fb38f6e5ace19e1b0630efc7171651d8a152ff6108790016eee219/userdata/shm","io.kubernetes.pod.name":"kindnet-xzvv2","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"d33fcc36-e797-4acb-862f-81982ea3bffa","k8s-app":"kindnet","kubernetes.io/config.seen":"2021-08-13T00:24:05.481571207Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1","tier":"node"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8f07e8a035ebd8bfd9803628db78d55607395460d6fd8a5fe3ce077c985d60c7","pid":43
11,"status":"running","bundle":"/run/containers/storage/overlay-containers/8f07e8a035ebd8bfd9803628db78d55607395460d6fd8a5fe3ce077c985d60c7/userdata","rootfs":"/var/lib/containers/storage/overlay/859bdf91d31ff07fa3b4aecb1b7df3683cef27458c8649b3940dcc53f5249120/merged","created":"2021-08-13T00:24:28.953650941Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T00:24:06.706851963Z\",\"kubernetes.io/config.source\":\"api\"}","io.kubernetes.cri-o.CNIResult":"{\"cniVersion\":\"0.4.0\",\"interfaces\":[{\"name\":\"veth1c58a79d\",\"mac\":\"c6:48:2a:1e:fb:f6\"},{\"name\":\"eth0\",\"mac\":\"42:a5:54:47:c7:9b\",\"sandbox\":\"/var/run/netns/0e4d22ab-4634-4776-9b26-a73cb085116e\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.244.0.2/24\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"dns\":{}}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"8f07e8a0
35ebd8bfd9803628db78d55607395460d6fd8a5fe3ce077c985d60c7","io.kubernetes.cri-o.ContainerName":"k8s_POD_coredns-6955765f44-fvjzm_kube-system_40a844df-90e2-4539-a9c0-ff1b20374ebf_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T00:24:28.813747878Z","io.kubernetes.cri-o.HostName":"coredns-6955765f44-fvjzm","io.kubernetes.cri-o.HostNetwork":"false","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/8f07e8a035ebd8bfd9803628db78d55607395460d6fd8a5fe3ce077c985d60c7/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"coredns-6955765f44-fvjzm","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"coredns-6955765f44-fvjzm\",\"pod-template-hash\":\"6955765f44\",\"io.kubernetes.container.name\":\"POD\",\"k8s-app\":\"kube-dns\",\"io.kubernetes.pod.uid\":\"40a844df-90e2-4539-a9c0-ff1b20374ebf\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system
_coredns-6955765f44-fvjzm_40a844df-90e2-4539-a9c0-ff1b20374ebf/8f07e8a035ebd8bfd9803628db78d55607395460d6fd8a5fe3ce077c985d60c7.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns-6955765f44-fvjzm\",\"uid\":\"40a844df-90e2-4539-a9c0-ff1b20374ebf\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/859bdf91d31ff07fa3b4aecb1b7df3683cef27458c8649b3940dcc53f5249120/merged","io.kubernetes.cri-o.Name":"k8s_coredns-6955765f44-fvjzm_kube-system_40a844df-90e2-4539-a9c0-ff1b20374ebf_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"false","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/8f07e8a035ebd8bfd9803628db78d55607395460d6fd8a5fe3ce077c985d60c7/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"8f07e8a035ebd8bfd9803628db78d55607395460d6fd8a5fe3ce077c985d60c
7","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/8f07e8a035ebd8bfd9803628db78d55607395460d6fd8a5fe3ce077c985d60c7/userdata/shm","io.kubernetes.pod.name":"coredns-6955765f44-fvjzm","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"40a844df-90e2-4539-a9c0-ff1b20374ebf","k8s-app":"kube-dns","kubernetes.io/config.seen":"2021-08-13T00:24:06.706851963Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-hash":"6955765f44"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"93238e6cef49886ac2a89e6d443b6d4f401750c66ca00e40674719670485ae18","pid":2799,"status":"running","bundle":"/run/containers/storage/overlay-containers/93238e6cef49886ac2a89e6d443b6d4f401750c66ca00e40674719670485ae18/userdata","rootfs":"/var/lib/containers/storage/overlay/f516bc06e1dcf5cba8ae7ffb23383509eb2a3d429cd7f3b8959d52b394662cc3/merged","created":"2021-08-13T00:23:43.945481489Z","annotations":{"io.co
ntainer.manager":"cri-o","io.kubernetes.container.hash":"ec604138","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ec604138\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"93238e6cef49886ac2a89e6d443b6d4f401750c66ca00e40674719670485ae18","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T00:23:43.715197405Z","io.kubernetes.cri-o.Image":"5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.17.0","io.kubernetes.cri-o.I
mageRef":"5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-test-preload-20210813002243-676638\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"01e1f4e495c3311ccc20368c1e385f74\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-test-preload-20210813002243-676638_01e1f4e495c3311ccc20368c1e385f74/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f516bc06e1dcf5cba8ae7ffb23383509eb2a3d429cd7f3b8959d52b394662cc3/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-test-preload-20210813002243-676638_kube-system_01e1f4e495c3311ccc20368c1e385f74_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/1234ecb4bec235cdb9
4a257d1edbbf18f99f7bb9fc87660bb73ffbf23c3d4c53/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"1234ecb4bec235cdb94a257d1edbbf18f99f7bb9fc87660bb73ffbf23c3d4c53","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-test-preload-20210813002243-676638_kube-system_01e1f4e495c3311ccc20368c1e385f74_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/01e1f4e495c3311ccc20368c1e385f74/containers/kube-controller-manager/30bf06e1\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/01e1f4e495c3311ccc20368c1e385f74/etc-hosts\",\"readonly\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true},{\"contain
er_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false}]","io.kubernetes.pod.name":"kube-controller-manager-test-preload-20210813002243-676638","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"01e1f4e495c3311ccc20368c1e385f74","kubernetes.io/config.hash":"01e1f4e495c3311ccc20368c1e385f74","kubernetes.io/config.seen":"2021-08-13T00:23:42.351516549Z","kubernetes.i
o/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9963f3337ea1b1cab36ab01898d2dbc4c5e136956e679f368e243acb48e2ba3b","pid":3872,"status":"running","bundle":"/run/containers/storage/overlay-containers/9963f3337ea1b1cab36ab01898d2dbc4c5e136956e679f368e243acb48e2ba3b/userdata","rootfs":"/var/lib/containers/storage/overlay/02b230d8afe859dd0457747b7a20615a44a3a45bf214511ac7a4995152ca91ed/merged","created":"2021-08-13T00:24:09.765648035Z","annotations":{"addonmanager.kubernetes.io/mode":"Reconcile","integration-test":"storage-provisioner","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T00:24:06.806025824Z\",\"kubernetes.io/config.source\":\"api\",\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"annota
tions\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"integration-test\\\":\\\"storage-provisioner\\\"},\\\"name\\\":\\\"storage-provisioner\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"/storage-provisioner\\\"],\\\"image\\\":\\\"gcr.io/k8s-minikube/storage-provisioner:v5\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"storage-provisioner\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"storage-provisioner\\\",\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/tmp\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"tmp\\\"}]}}\\n\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"9963f3337ea1b1cab36ab01898d2dbc4c5e136956e679f368e243acb48e2ba3b","io.kubernetes.cri-o.ContainerName":"k8s_POD_storage-provisioner_kube-system_f2d85412-2403-46f6-a704-4513ff9bcfa6_0","io.kubernetes.cri-o.
ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T00:24:09.666531458Z","io.kubernetes.cri-o.HostName":"test-preload-20210813002243-676638","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/9963f3337ea1b1cab36ab01898d2dbc4c5e136956e679f368e243acb48e2ba3b/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"storage-provisioner","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"f2d85412-2403-46f6-a704-4513ff9bcfa6\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"integration-test\":\"storage-provisioner\",\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_f2d85412-2403-46f6-a704-4513ff9bcfa6/9963f3337ea1b1cab36ab01898d2dbc4c5e136956e679f368e243acb48e2ba3b.log","io.kubernetes.cri-o.Metadata":"{\"name\"
:\"storage-provisioner\",\"uid\":\"f2d85412-2403-46f6-a704-4513ff9bcfa6\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/02b230d8afe859dd0457747b7a20615a44a3a45bf214511ac7a4995152ca91ed/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_kube-system_f2d85412-2403-46f6-a704-4513ff9bcfa6_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/9963f3337ea1b1cab36ab01898d2dbc4c5e136956e679f368e243acb48e2ba3b/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"9963f3337ea1b1cab36ab01898d2dbc4c5e136956e679f368e243acb48e2ba3b","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/9963f3337ea1b1cab36ab01898d2dbc4c5e136956e679f368e
243acb48e2ba3b/userdata/shm","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"f2d85412-2403-46f6-a704-4513ff9bcfa6","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T00:24:06.806025824Z","kubernetes.io/config.source":"api","org.systemd.property.Collect
Mode":"'inactive-or-failed'"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a6144a75d08798b823d12d57d635a39524409da42b1abc5e902f8bae12d391e9","pid":2811,"status":"running","bundle":"/run/containers/storage/overlay-containers/a6144a75d08798b823d12d57d635a39524409da42b1abc5e902f8bae12d391e9/userdata","rootfs":"/var/lib/containers/storage/overlay/f8da75f7bb570371f9ffdb97c0613d960899c26e2734836d7d096d6de8c44287/merged","created":"2021-08-13T00:23:43.945465706Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"db09dd8","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"db09dd8\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"Fil
e\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a6144a75d08798b823d12d57d635a39524409da42b1abc5e902f8bae12d391e9","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T00:23:43.725160109Z","io.kubernetes.cri-o.Image":"303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.3-0","io.kubernetes.cri-o.ImageRef":"303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-test-preload-20210813002243-676638\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"72f016a5582028266313238b626424a8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-test-preload-20210813002243-676638_72f016a5582028266313238b626424a8/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containe
rs/storage/overlay/f8da75f7bb570371f9ffdb97c0613d960899c26e2734836d7d096d6de8c44287/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-test-preload-20210813002243-676638_kube-system_72f016a5582028266313238b626424a8_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d04ecd72820f6b805af4315247157b01240a153d36b852df2ce2830a188c68c6/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"d04ecd72820f6b805af4315247157b01240a153d36b852df2ce2830a188c68c6","io.kubernetes.cri-o.SandboxName":"k8s_etcd-test-preload-20210813002243-676638_kube-system_72f016a5582028266313238b626424a8_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/72f016a5582028266313238b626424a8/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/72f016a558
2028266313238b626424a8/containers/etcd/152bdc45\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false}]","io.kubernetes.pod.name":"etcd-test-preload-20210813002243-676638","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"72f016a5582028266313238b626424a8","kubernetes.io/config.hash":"72f016a5582028266313238b626424a8","kubernetes.io/config.seen":"2021-08-13T00:23:42.351497777Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c0ed23ec9f5bde6d89a59994134c2e4c16453739fdae950be06d2e67faf1d7a3","pid":2664,"status":"running","bundle":"/run/containers/storage/overlay-containers/c0ed23ec9f5bde6d89a59994134c2e4c
16453739fdae950be06d2e67faf1d7a3/userdata","rootfs":"/var/lib/containers/storage/overlay/63f0258216301d6aa2abf84f07a4b8f01a030e2e93794f96d1fe211383b1465e/merged","created":"2021-08-13T00:23:43.557510318Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"f8c1872d6958c845ffffb18f158fd9df\",\"kubernetes.io/config.seen\":\"2021-08-13T00:23:42.351511589Z\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"c0ed23ec9f5bde6d89a59994134c2e4c16453739fdae950be06d2e67faf1d7a3","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-test-preload-20210813002243-676638_kube-system_f8c1872d6958c845ffffb18f158fd9df_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T00:23:43.453074451Z","io.kubernetes.cri-o.HostName":"test-preload-20210813002243-676638","io.kubernetes.cri-o.HostNetwork":"tr
ue","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/c0ed23ec9f5bde6d89a59994134c2e4c16453739fdae950be06d2e67faf1d7a3/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-apiserver-test-preload-20210813002243-676638","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"f8c1872d6958c845ffffb18f158fd9df\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-test-preload-20210813002243-676638\",\"component\":\"kube-apiserver\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-test-preload-20210813002243-676638_f8c1872d6958c845ffffb18f158fd9df/c0ed23ec9f5bde6d89a59994134c2e4c16453739fdae950be06d2e67faf1d7a3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-test-preload-20210813002243-676638\",\"uid\":\"f8c1872d6958c845ffffb18f158fd9df\",\"namespace\":\"kube-system\"}","io.kubernetes
.cri-o.MountPoint":"/var/lib/containers/storage/overlay/63f0258216301d6aa2abf84f07a4b8f01a030e2e93794f96d1fe211383b1465e/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-test-preload-20210813002243-676638_kube-system_f8c1872d6958c845ffffb18f158fd9df_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c0ed23ec9f5bde6d89a59994134c2e4c16453739fdae950be06d2e67faf1d7a3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"c0ed23ec9f5bde6d89a59994134c2e4c16453739fdae950be06d2e67faf1d7a3","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/c0ed23ec9f5bde6d89a59994134c2e4c16453739fdae950be06d2e67faf1d7a3/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-test-preload-20210813002
243-676638","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"f8c1872d6958c845ffffb18f158fd9df","kubernetes.io/config.hash":"f8c1872d6958c845ffffb18f158fd9df","kubernetes.io/config.seen":"2021-08-13T00:23:42.351511589Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ced11d540ac880d0f874487cbfd54fa7498fcc42b4cc0fb1c1e8e9c2bf31abea","pid":3905,"status":"running","bundle":"/run/containers/storage/overlay-containers/ced11d540ac880d0f874487cbfd54fa7498fcc42b4cc0fb1c1e8e9c2bf31abea/userdata","rootfs":"/var/lib/containers/storage/overlay/c237409aa09d993083129fac79763350ee89f7e1b215fd5c274363156d52f686/merged","created":"2021-08-13T00:24:09.985593883Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"cd2e4a94","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath"
:"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"cd2e4a94\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ced11d540ac880d0f874487cbfd54fa7498fcc42b4cc0fb1c1e8e9c2bf31abea","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T00:24:09.826559521Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",
\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"f2d85412-2403-46f6-a704-4513ff9bcfa6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_f2d85412-2403-46f6-a704-4513ff9bcfa6/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c237409aa09d993083129fac79763350ee89f7e1b215fd5c274363156d52f686/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_f2d85412-2403-46f6-a704-4513ff9bcfa6_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/9963f3337ea1b1cab36ab01898d2dbc4c5e136956e679f368e243acb48e2ba3b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"9963f3337ea1b1cab36ab01898d2dbc4c5e136956e679f368e243acb48e2ba3b","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_f2d85412-2403-46f6-a704-4513ff9bcfa6_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernete
s.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/f2d85412-2403-46f6-a704-4513ff9bcfa6/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/f2d85412-2403-46f6-a704-4513ff9bcfa6/containers/storage-provisioner/7fb4c760\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/f2d85412-2403-46f6-a704-4513ff9bcfa6/volumes/kubernetes.io~secret/storage-provisioner-token-d7wbl\",\"readonly\":true}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"f2d85412-2403-46f6-a704-4513ff9bcfa6","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":
\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2021-08-13T00:24:06.806025824Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d04ecd72820f6b805af4315247157b01240a153d36b852df2ce2830a188c68c6","pid":2676,"status":"running","bundle":"/run/conta
iners/storage/overlay-containers/d04ecd72820f6b805af4315247157b01240a153d36b852df2ce2830a188c68c6/userdata","rootfs":"/var/lib/containers/storage/overlay/1182ae5f1768551f6299143c940fc9c5e6990d2608ba282caed50a150d94b598/merged","created":"2021-08-13T00:23:43.589812349Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T00:23:42.351497777Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"72f016a5582028266313238b626424a8\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"d04ecd72820f6b805af4315247157b01240a153d36b852df2ce2830a188c68c6","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-test-preload-20210813002243-676638_kube-system_72f016a5582028266313238b626424a8_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T00:23:43.464052345Z","io.kubernetes.cri-o.HostName":"test-preload-20210813002243-
676638","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/d04ecd72820f6b805af4315247157b01240a153d36b852df2ce2830a188c68c6/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"etcd-test-preload-20210813002243-676638","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"etcd-test-preload-20210813002243-676638\",\"tier\":\"control-plane\",\"component\":\"etcd\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"72f016a5582028266313238b626424a8\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-test-preload-20210813002243-676638_72f016a5582028266313238b626424a8/d04ecd72820f6b805af4315247157b01240a153d36b852df2ce2830a188c68c6.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-test-preload-20210813002243-676638\",\"uid\":\"72f016a5582028266313238b626424a8\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-
o.MountPoint":"/var/lib/containers/storage/overlay/1182ae5f1768551f6299143c940fc9c5e6990d2608ba282caed50a150d94b598/merged","io.kubernetes.cri-o.Name":"k8s_etcd-test-preload-20210813002243-676638_kube-system_72f016a5582028266313238b626424a8_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d04ecd72820f6b805af4315247157b01240a153d36b852df2ce2830a188c68c6/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"d04ecd72820f6b805af4315247157b01240a153d36b852df2ce2830a188c68c6","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/d04ecd72820f6b805af4315247157b01240a153d36b852df2ce2830a188c68c6/userdata/shm","io.kubernetes.pod.name":"etcd-test-preload-20210813002243-676638","io.kubernete
s.pod.namespace":"kube-system","io.kubernetes.pod.uid":"72f016a5582028266313238b626424a8","kubernetes.io/config.hash":"72f016a5582028266313238b626424a8","kubernetes.io/config.seen":"2021-08-13T00:23:42.351497777Z","kubernetes.io/config.source":"file","org.systemd.property.CollectMode":"'inactive-or-failed'","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"db83a08f19dfbd19c10df470abc53cd30a298eaa1204a6fc6ef9efceb8a22a3b","pid":3594,"status":"running","bundle":"/run/containers/storage/overlay-containers/db83a08f19dfbd19c10df470abc53cd30a298eaa1204a6fc6ef9efceb8a22a3b/userdata","rootfs":"/var/lib/containers/storage/overlay/bb8b6544cf42a6388f57ed4445e4598e41feef0e983720c276d6ceb798ded8f1/merged","created":"2021-08-13T00:24:05.901707893Z","annotations":{"controller-revision-hash":"68bd87b66","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2021-08-13T00:24:05.489917535Z\",\"kubernetes.io/config.source\":\"
api\"}","io.kubernetes.cri-o.CgroupParent":"","io.kubernetes.cri-o.ContainerID":"db83a08f19dfbd19c10df470abc53cd30a298eaa1204a6fc6ef9efceb8a22a3b","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-proxy-c4knf_kube-system_623a45e5-f85f-4e51-a632-2053eaa19cd6_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2021-08-13T00:24:05.811131581Z","io.kubernetes.cri-o.HostName":"test-preload-20210813002243-676638","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/run/containers/storage/overlay-containers/db83a08f19dfbd19c10df470abc53cd30a298eaa1204a6fc6ef9efceb8a22a3b/userdata/hostname","io.kubernetes.cri-o.Image":"k8s.gcr.io/pause:3.2","io.kubernetes.cri-o.KubeName":"kube-proxy-c4knf","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"623a45e5-f85f-4e51-a632-2053eaa19cd6\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-proxy-c4knf\",\"io.kubernetes.container.name\":\"POD\",\"pod-template-generation\":\"1\",\"k8s-app\":
\"kube-proxy\",\"controller-revision-hash\":\"68bd87b66\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-c4knf_623a45e5-f85f-4e51-a632-2053eaa19cd6/db83a08f19dfbd19c10df470abc53cd30a298eaa1204a6fc6ef9efceb8a22a3b.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy-c4knf\",\"uid\":\"623a45e5-f85f-4e51-a632-2053eaa19cd6\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/bb8b6544cf42a6388f57ed4445e4598e41feef0e983720c276d6ceb798ded8f1/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy-c4knf_kube-system_623a45e5-f85f-4e51-a632-2053eaa19cd6_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/db83a08f19dfbd19c10df470abc53cd30a298eaa1204a6fc6ef9efceb8a22a3b/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHan
dler":"","io.kubernetes.cri-o.SandboxID":"db83a08f19dfbd19c10df470abc53cd30a298eaa1204a6fc6ef9efceb8a22a3b","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.ShmPath":"/run/containers/storage/overlay-containers/db83a08f19dfbd19c10df470abc53cd30a298eaa1204a6fc6ef9efceb8a22a3b/userdata/shm","io.kubernetes.pod.name":"kube-proxy-c4knf","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"623a45e5-f85f-4e51-a632-2053eaa19cd6","k8s-app":"kube-proxy","kubernetes.io/config.seen":"2021-08-13T00:24:05.489917535Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","pod-template-generation":"1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e62240b4305b268ea5c6bf1f0fc86a635a19354c96d3cf79311eca79dfda55ca","pid":3646,"status":"running","bundle":"/run/containers/storage/overlay-containers/e62240b4305b268ea5c6bf1f0fc86a635a19354c96d3cf79311eca79dfda55ca/userdata","rootfs":"/var/lib/containers/storage/overlay/e67e963dd49ed6a73eedc2b80b244f4174c500aa8
17b9f95a22651697b96b4b0/merged","created":"2021-08-13T00:24:06.041623766Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7fc03172","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7fc03172\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e62240b4305b268ea5c6bf1f0fc86a635a19354c96d3cf79311eca79dfda55ca","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2021-08-13T00:24:05.96277817Z","io.kubernetes.cri-o.Image":"7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19","io.kubernetes.cr
i-o.ImageName":"k8s.gcr.io/kube-proxy:v1.17.0","io.kubernetes.cri-o.ImageRef":"7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-c4knf\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"623a45e5-f85f-4e51-a632-2053eaa19cd6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-c4knf_623a45e5-f85f-4e51-a632-2053eaa19cd6/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e67e963dd49ed6a73eedc2b80b244f4174c500aa817b9f95a22651697b96b4b0/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-c4knf_kube-system_623a45e5-f85f-4e51-a632-2053eaa19cd6_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/db83a08f19dfbd19c10df470abc53cd30a298eaa1204a6fc6ef9efceb8a22a3b/userdata/resolv.conf","io.kubernetes.cri-o.Sandbox
ID":"db83a08f19dfbd19c10df470abc53cd30a298eaa1204a6fc6ef9efceb8a22a3b","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-c4knf_kube-system_623a45e5-f85f-4e51-a632-2053eaa19cd6_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/623a45e5-f85f-4e51-a632-2053eaa19cd6/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/623a45e5-f85f-4e51-a632-2053eaa19cd6/containers/kube-proxy/87a21bfd\",\"readonly\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/623a45e5-f85f-4e51-a632-2053eaa19cd6/volumes/kubernetes.io~configmap/kube-pro
xy\",\"readonly\":true},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/623a45e5-f85f-4e51-a632-2053eaa19cd6/volumes/kubernetes.io~secret/kube-proxy-token-lzch4\",\"readonly\":true}]","io.kubernetes.pod.name":"kube-proxy-c4knf","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"623a45e5-f85f-4e51-a632-2053eaa19cd6","kubernetes.io/config.seen":"2021-08-13T00:24:05.489917535Z","kubernetes.io/config.source":"api","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0813 00:24:54.954203  807704 cri.go:113] list returned 16 containers
	I0813 00:24:54.954222  807704 cri.go:116] container: {ID:06b01061b5afd50977f0f1cf3a696ed318947882c403d0ac8149be21d1b125fc Status:running}
	I0813 00:24:54.954235  807704 cri.go:118] skipping 06b01061b5afd50977f0f1cf3a696ed318947882c403d0ac8149be21d1b125fc - not in ps
	I0813 00:24:54.954239  807704 cri.go:116] container: {ID:099efe330404d5566ed4ca3f92b52774a0906317bf729a003a34c8636522b6fc Status:running}
	I0813 00:24:54.954245  807704 cri.go:122] skipping {099efe330404d5566ed4ca3f92b52774a0906317bf729a003a34c8636522b6fc running}: state = "running", want "paused"
	I0813 00:24:54.954257  807704 cri.go:116] container: {ID:1234ecb4bec235cdb94a257d1edbbf18f99f7bb9fc87660bb73ffbf23c3d4c53 Status:running}
	I0813 00:24:54.954262  807704 cri.go:118] skipping 1234ecb4bec235cdb94a257d1edbbf18f99f7bb9fc87660bb73ffbf23c3d4c53 - not in ps
	I0813 00:24:54.954265  807704 cri.go:116] container: {ID:1b3b5683f4a9ff053524367e87259b91d3dc153e616745696f2011092fb1d1df Status:running}
	I0813 00:24:54.954269  807704 cri.go:122] skipping {1b3b5683f4a9ff053524367e87259b91d3dc153e616745696f2011092fb1d1df running}: state = "running", want "paused"
	I0813 00:24:54.954275  807704 cri.go:116] container: {ID:2a5a4a41062eaa52c37a212a8b205aef44a432dae42172939cdfe77658bf13de Status:running}
	I0813 00:24:54.954279  807704 cri.go:122] skipping {2a5a4a41062eaa52c37a212a8b205aef44a432dae42172939cdfe77658bf13de running}: state = "running", want "paused"
	I0813 00:24:54.954283  807704 cri.go:116] container: {ID:2e4137d0d5a8e2c1060d443f99b58a2aceff313d1cd64d742636611ae3b4e1a2 Status:running}
	I0813 00:24:54.954287  807704 cri.go:122] skipping {2e4137d0d5a8e2c1060d443f99b58a2aceff313d1cd64d742636611ae3b4e1a2 running}: state = "running", want "paused"
	I0813 00:24:54.954292  807704 cri.go:116] container: {ID:3d524154e8fb38f6e5ace19e1b0630efc7171651d8a152ff6108790016eee219 Status:running}
	I0813 00:24:54.954296  807704 cri.go:118] skipping 3d524154e8fb38f6e5ace19e1b0630efc7171651d8a152ff6108790016eee219 - not in ps
	I0813 00:24:54.954300  807704 cri.go:116] container: {ID:8f07e8a035ebd8bfd9803628db78d55607395460d6fd8a5fe3ce077c985d60c7 Status:running}
	I0813 00:24:54.954304  807704 cri.go:118] skipping 8f07e8a035ebd8bfd9803628db78d55607395460d6fd8a5fe3ce077c985d60c7 - not in ps
	I0813 00:24:54.954307  807704 cri.go:116] container: {ID:93238e6cef49886ac2a89e6d443b6d4f401750c66ca00e40674719670485ae18 Status:running}
	I0813 00:24:54.954311  807704 cri.go:122] skipping {93238e6cef49886ac2a89e6d443b6d4f401750c66ca00e40674719670485ae18 running}: state = "running", want "paused"
	I0813 00:24:54.954316  807704 cri.go:116] container: {ID:9963f3337ea1b1cab36ab01898d2dbc4c5e136956e679f368e243acb48e2ba3b Status:running}
	I0813 00:24:54.954324  807704 cri.go:118] skipping 9963f3337ea1b1cab36ab01898d2dbc4c5e136956e679f368e243acb48e2ba3b - not in ps
	I0813 00:24:54.954341  807704 cri.go:116] container: {ID:a6144a75d08798b823d12d57d635a39524409da42b1abc5e902f8bae12d391e9 Status:running}
	I0813 00:24:54.954345  807704 cri.go:122] skipping {a6144a75d08798b823d12d57d635a39524409da42b1abc5e902f8bae12d391e9 running}: state = "running", want "paused"
	I0813 00:24:54.954349  807704 cri.go:116] container: {ID:c0ed23ec9f5bde6d89a59994134c2e4c16453739fdae950be06d2e67faf1d7a3 Status:running}
	I0813 00:24:54.954354  807704 cri.go:118] skipping c0ed23ec9f5bde6d89a59994134c2e4c16453739fdae950be06d2e67faf1d7a3 - not in ps
	I0813 00:24:54.954357  807704 cri.go:116] container: {ID:ced11d540ac880d0f874487cbfd54fa7498fcc42b4cc0fb1c1e8e9c2bf31abea Status:running}
	I0813 00:24:54.954362  807704 cri.go:122] skipping {ced11d540ac880d0f874487cbfd54fa7498fcc42b4cc0fb1c1e8e9c2bf31abea running}: state = "running", want "paused"
	I0813 00:24:54.954369  807704 cri.go:116] container: {ID:d04ecd72820f6b805af4315247157b01240a153d36b852df2ce2830a188c68c6 Status:running}
	I0813 00:24:54.954373  807704 cri.go:118] skipping d04ecd72820f6b805af4315247157b01240a153d36b852df2ce2830a188c68c6 - not in ps
	I0813 00:24:54.954376  807704 cri.go:116] container: {ID:db83a08f19dfbd19c10df470abc53cd30a298eaa1204a6fc6ef9efceb8a22a3b Status:running}
	I0813 00:24:54.954380  807704 cri.go:118] skipping db83a08f19dfbd19c10df470abc53cd30a298eaa1204a6fc6ef9efceb8a22a3b - not in ps
	I0813 00:24:54.954383  807704 cri.go:116] container: {ID:e62240b4305b268ea5c6bf1f0fc86a635a19354c96d3cf79311eca79dfda55ca Status:running}
	I0813 00:24:54.954388  807704 cri.go:122] skipping {e62240b4305b268ea5c6bf1f0fc86a635a19354c96d3cf79311eca79dfda55ca running}: state = "running", want "paused"
	I0813 00:24:54.954428  807704 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 00:24:54.962558  807704 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 00:24:54.962589  807704 kubeadm.go:600] restartCluster start
	I0813 00:24:54.962638  807704 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 00:24:54.970089  807704 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 00:24:54.970832  807704 kubeconfig.go:93] found "test-preload-20210813002243-676638" server: "https://192.168.49.2:8443"
	I0813 00:24:54.971270  807704 kapi.go:59] client config for test-preload-20210813002243-676638: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813002243-676638/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813002243-676638/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 00:24:54.972960  807704 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 00:24:54.980357  807704 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2021-08-13 00:23:37.990451047 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2021-08-13 00:24:54.587624385 +0000
	@@ -40,7 +40,7 @@
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.17.0
	+kubernetesVersion: v1.17.3
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I0813 00:24:54.980380  807704 kubeadm.go:1032] stopping kube-system containers ...
	I0813 00:24:54.980395  807704 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0813 00:24:54.980442  807704 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 00:24:55.006281  807704 cri.go:76] found id: "1b3b5683f4a9ff053524367e87259b91d3dc153e616745696f2011092fb1d1df"
	I0813 00:24:55.006308  807704 cri.go:76] found id: "2e4137d0d5a8e2c1060d443f99b58a2aceff313d1cd64d742636611ae3b4e1a2"
	I0813 00:24:55.006315  807704 cri.go:76] found id: "ced11d540ac880d0f874487cbfd54fa7498fcc42b4cc0fb1c1e8e9c2bf31abea"
	I0813 00:24:55.006321  807704 cri.go:76] found id: "e62240b4305b268ea5c6bf1f0fc86a635a19354c96d3cf79311eca79dfda55ca"
	I0813 00:24:55.006326  807704 cri.go:76] found id: "099efe330404d5566ed4ca3f92b52774a0906317bf729a003a34c8636522b6fc"
	I0813 00:24:55.006338  807704 cri.go:76] found id: "93238e6cef49886ac2a89e6d443b6d4f401750c66ca00e40674719670485ae18"
	I0813 00:24:55.006342  807704 cri.go:76] found id: "a6144a75d08798b823d12d57d635a39524409da42b1abc5e902f8bae12d391e9"
	I0813 00:24:55.006347  807704 cri.go:76] found id: "2a5a4a41062eaa52c37a212a8b205aef44a432dae42172939cdfe77658bf13de"
	I0813 00:24:55.006350  807704 cri.go:76] found id: ""
	I0813 00:24:55.006355  807704 cri.go:221] Stopping containers: [1b3b5683f4a9ff053524367e87259b91d3dc153e616745696f2011092fb1d1df 2e4137d0d5a8e2c1060d443f99b58a2aceff313d1cd64d742636611ae3b4e1a2 ced11d540ac880d0f874487cbfd54fa7498fcc42b4cc0fb1c1e8e9c2bf31abea e62240b4305b268ea5c6bf1f0fc86a635a19354c96d3cf79311eca79dfda55ca 099efe330404d5566ed4ca3f92b52774a0906317bf729a003a34c8636522b6fc 93238e6cef49886ac2a89e6d443b6d4f401750c66ca00e40674719670485ae18 a6144a75d08798b823d12d57d635a39524409da42b1abc5e902f8bae12d391e9 2a5a4a41062eaa52c37a212a8b205aef44a432dae42172939cdfe77658bf13de]
	I0813 00:24:55.006403  807704 ssh_runner.go:149] Run: which crictl
	I0813 00:24:55.009627  807704 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 1b3b5683f4a9ff053524367e87259b91d3dc153e616745696f2011092fb1d1df 2e4137d0d5a8e2c1060d443f99b58a2aceff313d1cd64d742636611ae3b4e1a2 ced11d540ac880d0f874487cbfd54fa7498fcc42b4cc0fb1c1e8e9c2bf31abea e62240b4305b268ea5c6bf1f0fc86a635a19354c96d3cf79311eca79dfda55ca 099efe330404d5566ed4ca3f92b52774a0906317bf729a003a34c8636522b6fc 93238e6cef49886ac2a89e6d443b6d4f401750c66ca00e40674719670485ae18 a6144a75d08798b823d12d57d635a39524409da42b1abc5e902f8bae12d391e9 2a5a4a41062eaa52c37a212a8b205aef44a432dae42172939cdfe77658bf13de
	I0813 00:24:56.447318  807704 ssh_runner.go:189] Completed: sudo /usr/bin/crictl stop 1b3b5683f4a9ff053524367e87259b91d3dc153e616745696f2011092fb1d1df 2e4137d0d5a8e2c1060d443f99b58a2aceff313d1cd64d742636611ae3b4e1a2 ced11d540ac880d0f874487cbfd54fa7498fcc42b4cc0fb1c1e8e9c2bf31abea e62240b4305b268ea5c6bf1f0fc86a635a19354c96d3cf79311eca79dfda55ca 099efe330404d5566ed4ca3f92b52774a0906317bf729a003a34c8636522b6fc 93238e6cef49886ac2a89e6d443b6d4f401750c66ca00e40674719670485ae18 a6144a75d08798b823d12d57d635a39524409da42b1abc5e902f8bae12d391e9 2a5a4a41062eaa52c37a212a8b205aef44a432dae42172939cdfe77658bf13de: (1.437640527s)
	I0813 00:24:56.447398  807704 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 00:24:56.457898  807704 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 00:24:56.466000  807704 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5611 Aug 13 00:23 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5651 Aug 13 00:23 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2075 Aug 13 00:23 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5595 Aug 13 00:23 /etc/kubernetes/scheduler.conf
	
	I0813 00:24:56.466077  807704 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0813 00:24:56.473500  807704 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0813 00:24:56.480859  807704 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0813 00:24:56.488516  807704 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0813 00:24:56.495913  807704 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 00:24:56.503591  807704 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 00:24:56.503622  807704 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 00:24:56.551780  807704 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 00:24:57.271154  807704 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 00:24:57.432876  807704 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 00:24:57.497766  807704 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 00:24:57.613525  807704 api_server.go:50] waiting for apiserver process to appear ...
	I0813 00:24:57.613596  807704 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 00:24:58.194801  807704 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 00:24:58.694834  807704 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 00:24:59.194273  807704 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 00:24:59.215554  807704 api_server.go:70] duration metric: took 1.602028453s to wait for apiserver process to appear ...
	I0813 00:24:59.215587  807704 api_server.go:86] waiting for apiserver healthz status ...
	I0813 00:24:59.215600  807704 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 00:25:02.364712  807704 api_server.go:265] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 00:25:02.364753  807704 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 00:25:02.865168  807704 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 00:25:02.870297  807704 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 00:25:02.870333  807704 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 00:25:03.365553  807704 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 00:25:03.370249  807704 api_server.go:265] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 00:25:03.370281  807704 api_server.go:101] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 00:25:03.865254  807704 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 00:25:03.870107  807704 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0813 00:25:03.879963  807704 api_server.go:139] control plane version: v1.17.3
	I0813 00:25:03.879992  807704 api_server.go:129] duration metric: took 4.664398766s to wait for apiserver health ...
	I0813 00:25:03.880007  807704 cni.go:93] Creating CNI manager for ""
	I0813 00:25:03.880016  807704 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0813 00:25:03.882674  807704 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 00:25:03.882761  807704 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 00:25:03.887003  807704 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.17.3/kubectl ...
	I0813 00:25:03.887029  807704 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0813 00:25:03.901269  807704 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.17.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 00:25:04.096906  807704 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 00:25:04.106641  807704 system_pods.go:59] 8 kube-system pods found
	I0813 00:25:04.106679  807704 system_pods.go:61] "coredns-6955765f44-fvjzm" [40a844df-90e2-4539-a9c0-ff1b20374ebf] Running
	I0813 00:25:04.106684  807704 system_pods.go:61] "etcd-test-preload-20210813002243-676638" [5cb58bd6-f02d-49ec-8b90-9456ea038342] Running
	I0813 00:25:04.106688  807704 system_pods.go:61] "kindnet-xzvv2" [d33fcc36-e797-4acb-862f-81982ea3bffa] Running
	I0813 00:25:04.106700  807704 system_pods.go:61] "kube-apiserver-test-preload-20210813002243-676638" [e1256aa4-baa5-4cd6-b2f9-9e2d43ac918b] Pending
	I0813 00:25:04.106705  807704 system_pods.go:61] "kube-controller-manager-test-preload-20210813002243-676638" [bb86be8c-0c0f-490d-8752-6dc05ab37484] Pending
	I0813 00:25:04.106709  807704 system_pods.go:61] "kube-proxy-c4knf" [623a45e5-f85f-4e51-a632-2053eaa19cd6] Running
	I0813 00:25:04.106713  807704 system_pods.go:61] "kube-scheduler-test-preload-20210813002243-676638" [7526fd39-b27e-4183-b2ea-81ea6318d69d] Pending
	I0813 00:25:04.106716  807704 system_pods.go:61] "storage-provisioner" [f2d85412-2403-46f6-a704-4513ff9bcfa6] Running
	I0813 00:25:04.106723  807704 system_pods.go:74] duration metric: took 9.790877ms to wait for pod list to return data ...
	I0813 00:25:04.106733  807704 node_conditions.go:102] verifying NodePressure condition ...
	I0813 00:25:04.110054  807704 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 00:25:04.110127  807704 node_conditions.go:123] node cpu capacity is 8
	I0813 00:25:04.110145  807704 node_conditions.go:105] duration metric: took 3.406767ms to run NodePressure ...
	I0813 00:25:04.110171  807704 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 00:25:04.275601  807704 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 00:25:04.278814  807704 kubeadm.go:746] kubelet initialised
	I0813 00:25:04.278906  807704 kubeadm.go:747] duration metric: took 3.205974ms waiting for restarted kubelet to initialise ...
	I0813 00:25:04.278934  807704 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:25:04.282379  807704 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6955765f44-fvjzm" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:04.291445  807704 pod_ready.go:92] pod "coredns-6955765f44-fvjzm" in "kube-system" namespace has status "Ready":"True"
	I0813 00:25:04.291473  807704 pod_ready.go:81] duration metric: took 9.055973ms waiting for pod "coredns-6955765f44-fvjzm" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:04.291486  807704 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-20210813002243-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:04.296133  807704 pod_ready.go:92] pod "etcd-test-preload-20210813002243-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:25:04.296154  807704 pod_ready.go:81] duration metric: took 4.659983ms waiting for pod "etcd-test-preload-20210813002243-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:04.296167  807704 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-20210813002243-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:05.805336  807704 pod_ready.go:92] pod "kube-apiserver-test-preload-20210813002243-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:25:05.805375  807704 pod_ready.go:81] duration metric: took 1.509199469s waiting for pod "kube-apiserver-test-preload-20210813002243-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:05.805392  807704 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-20210813002243-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:05.809835  807704 pod_ready.go:92] pod "kube-controller-manager-test-preload-20210813002243-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:25:05.809857  807704 pod_ready.go:81] duration metric: took 4.456582ms waiting for pod "kube-controller-manager-test-preload-20210813002243-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:05.809869  807704 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c4knf" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:06.099918  807704 pod_ready.go:92] pod "kube-proxy-c4knf" in "kube-system" namespace has status "Ready":"True"
	I0813 00:25:06.099941  807704 pod_ready.go:81] duration metric: took 290.066178ms waiting for pod "kube-proxy-c4knf" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:06.099951  807704 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-20210813002243-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:06.500021  807704 pod_ready.go:92] pod "kube-scheduler-test-preload-20210813002243-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:25:06.500045  807704 pod_ready.go:81] duration metric: took 400.087342ms waiting for pod "kube-scheduler-test-preload-20210813002243-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:06.500057  807704 pod_ready.go:38] duration metric: took 2.221102673s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:25:06.500076  807704 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 00:25:06.520619  807704 ops.go:34] apiserver oom_adj: -16
	I0813 00:25:06.520644  807704 kubeadm.go:604] restartCluster took 11.558047812s
	I0813 00:25:06.520651  807704 kubeadm.go:392] StartCluster complete in 11.633773961s
	I0813 00:25:06.520670  807704 settings.go:142] acquiring lock: {Name:mk8e048b414f35bb1583f1d1b3e929d90c1bd9ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:25:06.520787  807704 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:25:06.521423  807704 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig: {Name:mk7dda383efa2f679c68affe6e459fff93248137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:25:06.522064  807704 kapi.go:59] client config for test-preload-20210813002243-676638: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813002243-676638/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813002243-676638/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 00:25:07.032879  807704 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "test-preload-20210813002243-676638" rescaled to 1
	I0813 00:25:07.032936  807704 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.17.3 ControlPlane:true Worker:true}
	I0813 00:25:07.035319  807704 out.go:177] * Verifying Kubernetes components...
	I0813 00:25:07.032977  807704 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 00:25:07.035388  807704 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:25:07.033008  807704 addons.go:342] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0813 00:25:07.035532  807704 addons.go:59] Setting storage-provisioner=true in profile "test-preload-20210813002243-676638"
	I0813 00:25:07.035541  807704 addons.go:59] Setting default-storageclass=true in profile "test-preload-20210813002243-676638"
	I0813 00:25:07.035549  807704 addons.go:135] Setting addon storage-provisioner=true in "test-preload-20210813002243-676638"
	W0813 00:25:07.035556  807704 addons.go:147] addon storage-provisioner should already be in state true
	I0813 00:25:07.035561  807704 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-20210813002243-676638"
	I0813 00:25:07.035585  807704 host.go:66] Checking if "test-preload-20210813002243-676638" exists ...
	I0813 00:25:07.035840  807704 cli_runner.go:115] Run: docker container inspect test-preload-20210813002243-676638 --format={{.State.Status}}
	I0813 00:25:07.036103  807704 cli_runner.go:115] Run: docker container inspect test-preload-20210813002243-676638 --format={{.State.Status}}
	I0813 00:25:07.081096  807704 kapi.go:59] client config for test-preload-20210813002243-676638: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813002243-676638/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/test-preload-20210813002243-676638/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e2a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0813 00:25:07.088221  807704 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 00:25:07.088371  807704 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 00:25:07.088388  807704 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 00:25:07.088509  807704 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210813002243-676638
	I0813 00:25:07.089148  807704 addons.go:135] Setting addon default-storageclass=true in "test-preload-20210813002243-676638"
	W0813 00:25:07.089168  807704 addons.go:147] addon default-storageclass should already be in state true
	I0813 00:25:07.089194  807704 host.go:66] Checking if "test-preload-20210813002243-676638" exists ...
	I0813 00:25:07.089624  807704 cli_runner.go:115] Run: docker container inspect test-preload-20210813002243-676638 --format={{.State.Status}}
	I0813 00:25:07.126350  807704 node_ready.go:35] waiting up to 6m0s for node "test-preload-20210813002243-676638" to be "Ready" ...
	I0813 00:25:07.126572  807704 start.go:716] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0813 00:25:07.128918  807704 node_ready.go:49] node "test-preload-20210813002243-676638" has status "Ready":"True"
	I0813 00:25:07.128937  807704 node_ready.go:38] duration metric: took 2.54805ms waiting for node "test-preload-20210813002243-676638" to be "Ready" ...
	I0813 00:25:07.128946  807704 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:25:07.133086  807704 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6955765f44-fvjzm" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:07.140779  807704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33343 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/test-preload-20210813002243-676638/id_rsa Username:docker}
	I0813 00:25:07.147064  807704 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 00:25:07.147088  807704 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 00:25:07.147153  807704 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20210813002243-676638
	I0813 00:25:07.188867  807704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33343 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/test-preload-20210813002243-676638/id_rsa Username:docker}
	I0813 00:25:07.231496  807704 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 00:25:07.279213  807704 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 00:25:07.300578  807704 pod_ready.go:92] pod "coredns-6955765f44-fvjzm" in "kube-system" namespace has status "Ready":"True"
	I0813 00:25:07.300598  807704 pod_ready.go:81] duration metric: took 167.469056ms waiting for pod "coredns-6955765f44-fvjzm" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:07.300609  807704 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-20210813002243-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:07.461517  807704 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 00:25:07.461551  807704 addons.go:344] enableAddons completed in 428.557193ms
	I0813 00:25:07.701130  807704 pod_ready.go:92] pod "etcd-test-preload-20210813002243-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:25:07.701154  807704 pod_ready.go:81] duration metric: took 400.537886ms waiting for pod "etcd-test-preload-20210813002243-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:07.701168  807704 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-20210813002243-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:08.099986  807704 pod_ready.go:92] pod "kube-apiserver-test-preload-20210813002243-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:25:08.100008  807704 pod_ready.go:81] duration metric: took 398.827521ms waiting for pod "kube-apiserver-test-preload-20210813002243-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:08.100023  807704 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-20210813002243-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:08.500363  807704 pod_ready.go:92] pod "kube-controller-manager-test-preload-20210813002243-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:25:08.500388  807704 pod_ready.go:81] duration metric: took 400.358406ms waiting for pod "kube-controller-manager-test-preload-20210813002243-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:08.500400  807704 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c4knf" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:08.900427  807704 pod_ready.go:92] pod "kube-proxy-c4knf" in "kube-system" namespace has status "Ready":"True"
	I0813 00:25:08.900449  807704 pod_ready.go:81] duration metric: took 400.042581ms waiting for pod "kube-proxy-c4knf" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:08.900461  807704 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-20210813002243-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:09.300435  807704 pod_ready.go:92] pod "kube-scheduler-test-preload-20210813002243-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:25:09.300460  807704 pod_ready.go:81] duration metric: took 399.991081ms waiting for pod "kube-scheduler-test-preload-20210813002243-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:25:09.300476  807704 pod_ready.go:38] duration metric: took 2.171517174s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:25:09.300497  807704 api_server.go:50] waiting for apiserver process to appear ...
	I0813 00:25:09.300547  807704 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 00:25:09.323260  807704 api_server.go:70] duration metric: took 2.290286259s to wait for apiserver process to appear ...
	I0813 00:25:09.323289  807704 api_server.go:86] waiting for apiserver healthz status ...
	I0813 00:25:09.323300  807704 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 00:25:09.328198  807704 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0813 00:25:09.329011  807704 api_server.go:139] control plane version: v1.17.3
	I0813 00:25:09.329031  807704 api_server.go:129] duration metric: took 5.737245ms to wait for apiserver health ...
	I0813 00:25:09.329038  807704 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 00:25:09.501533  807704 system_pods.go:59] 8 kube-system pods found
	I0813 00:25:09.501569  807704 system_pods.go:61] "coredns-6955765f44-fvjzm" [40a844df-90e2-4539-a9c0-ff1b20374ebf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 00:25:09.501575  807704 system_pods.go:61] "etcd-test-preload-20210813002243-676638" [5cb58bd6-f02d-49ec-8b90-9456ea038342] Running
	I0813 00:25:09.501583  807704 system_pods.go:61] "kindnet-xzvv2" [d33fcc36-e797-4acb-862f-81982ea3bffa] Running
	I0813 00:25:09.501588  807704 system_pods.go:61] "kube-apiserver-test-preload-20210813002243-676638" [e1256aa4-baa5-4cd6-b2f9-9e2d43ac918b] Running
	I0813 00:25:09.501593  807704 system_pods.go:61] "kube-controller-manager-test-preload-20210813002243-676638" [bb86be8c-0c0f-490d-8752-6dc05ab37484] Running
	I0813 00:25:09.501597  807704 system_pods.go:61] "kube-proxy-c4knf" [623a45e5-f85f-4e51-a632-2053eaa19cd6] Running
	I0813 00:25:09.501600  807704 system_pods.go:61] "kube-scheduler-test-preload-20210813002243-676638" [7526fd39-b27e-4183-b2ea-81ea6318d69d] Running
	I0813 00:25:09.501604  807704 system_pods.go:61] "storage-provisioner" [f2d85412-2403-46f6-a704-4513ff9bcfa6] Running
	I0813 00:25:09.501610  807704 system_pods.go:74] duration metric: took 172.566116ms to wait for pod list to return data ...
	I0813 00:25:09.501619  807704 default_sa.go:34] waiting for default service account to be created ...
	I0813 00:25:09.700584  807704 default_sa.go:45] found service account: "default"
	I0813 00:25:09.700616  807704 default_sa.go:55] duration metric: took 198.987782ms for default service account to be created ...
	I0813 00:25:09.700626  807704 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 00:25:09.901924  807704 system_pods.go:86] 8 kube-system pods found
	I0813 00:25:09.901963  807704 system_pods.go:89] "coredns-6955765f44-fvjzm" [40a844df-90e2-4539-a9c0-ff1b20374ebf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 00:25:09.901971  807704 system_pods.go:89] "etcd-test-preload-20210813002243-676638" [5cb58bd6-f02d-49ec-8b90-9456ea038342] Running
	I0813 00:25:09.901979  807704 system_pods.go:89] "kindnet-xzvv2" [d33fcc36-e797-4acb-862f-81982ea3bffa] Running
	I0813 00:25:09.901984  807704 system_pods.go:89] "kube-apiserver-test-preload-20210813002243-676638" [e1256aa4-baa5-4cd6-b2f9-9e2d43ac918b] Running
	I0813 00:25:09.901989  807704 system_pods.go:89] "kube-controller-manager-test-preload-20210813002243-676638" [bb86be8c-0c0f-490d-8752-6dc05ab37484] Running
	I0813 00:25:09.901993  807704 system_pods.go:89] "kube-proxy-c4knf" [623a45e5-f85f-4e51-a632-2053eaa19cd6] Running
	I0813 00:25:09.901996  807704 system_pods.go:89] "kube-scheduler-test-preload-20210813002243-676638" [7526fd39-b27e-4183-b2ea-81ea6318d69d] Running
	I0813 00:25:09.902000  807704 system_pods.go:89] "storage-provisioner" [f2d85412-2403-46f6-a704-4513ff9bcfa6] Running
	I0813 00:25:09.902006  807704 system_pods.go:126] duration metric: took 201.37545ms to wait for k8s-apps to be running ...
	I0813 00:25:09.902014  807704 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 00:25:09.902064  807704 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:25:09.912799  807704 system_svc.go:56] duration metric: took 10.774702ms WaitForService to wait for kubelet.
	I0813 00:25:09.912831  807704 kubeadm.go:547] duration metric: took 2.879864881s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 00:25:09.912853  807704 node_conditions.go:102] verifying NodePressure condition ...
	I0813 00:25:10.100345  807704 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 00:25:10.100373  807704 node_conditions.go:123] node cpu capacity is 8
	I0813 00:25:10.100388  807704 node_conditions.go:105] duration metric: took 187.529417ms to run NodePressure ...
	I0813 00:25:10.100400  807704 start.go:231] waiting for startup goroutines ...
	I0813 00:25:10.147604  807704 start.go:462] kubectl: 1.20.5, cluster: 1.17.3 (minor skew: 3)
	I0813 00:25:10.150026  807704 out.go:177] 
	W0813 00:25:10.150240  807704 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.17.3.
	I0813 00:25:10.152268  807704 out.go:177]   - Want kubectl v1.17.3? Try 'minikube kubectl -- get pods -A'
	I0813 00:25:10.154205  807704 out.go:177] * Done! kubectl is now configured to use "test-preload-20210813002243-676638" cluster and "default" namespace by default
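
	The "minor skew: 3" figure in the `start.go:462` line above is just the difference between kubectl's and the cluster's minor version numbers (1.20.5 vs 1.17.3). A minimal sketch of that comparison — a hypothetical helper, not minikube's actual code:

	```python
	def minor_skew(client: str, server: str) -> int:
	    """Absolute difference of the minor components of two
	    'major.minor.patch' version strings."""
	    client_minor = int(client.split(".")[1])
	    server_minor = int(server.split(".")[1])
	    return abs(client_minor - server_minor)

	print(minor_skew("1.20.5", "1.17.3"))  # prints 3
	```

	A skew above 1 is what triggers the compatibility warning printed just above.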
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 00:22:46 UTC, end at Fri 2021-08-13 00:25:11 UTC. --
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.295745590Z" level=warning msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.301559730Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.304242652Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.307098363Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.316909717Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.316949765Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.316984578Z" level=warning msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.322137746Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.324785845Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.327115672Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.337036584Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.337069616Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.337093742Z" level=warning msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.337144881Z" level=warning msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.342123337Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.344834805Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.347291084Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.358181207Z" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.358225231Z" level=warning msg="Default CNI network name kindnet is unchangeable"
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.597205299Z" level=info msg="Stopping pod sandbox: c0ed23ec9f5bde6d89a59994134c2e4c16453739fdae950be06d2e67faf1d7a3" id=00d1ba6e-72ef-43aa-a59d-23a11d1fe3df name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.597211428Z" level=info msg="Stopping pod sandbox: 1234ecb4bec235cdb94a257d1edbbf18f99f7bb9fc87660bb73ffbf23c3d4c53" id=b3382ab0-636e-4d76-9e23-dc6b484f7084 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.597214998Z" level=info msg="Stopping pod sandbox: 06b01061b5afd50977f0f1cf3a696ed318947882c403d0ac8149be21d1b125fc" id=900c280f-316c-4cdb-b809-cbc319937e31 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.793197040Z" level=info msg="Stopped pod sandbox: 06b01061b5afd50977f0f1cf3a696ed318947882c403d0ac8149be21d1b125fc" id=900c280f-316c-4cdb-b809-cbc319937e31 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.831529942Z" level=info msg="Stopped pod sandbox: 1234ecb4bec235cdb94a257d1edbbf18f99f7bb9fc87660bb73ffbf23c3d4c53" id=b3382ab0-636e-4d76-9e23-dc6b484f7084 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 13 00:25:03 test-preload-20210813002243-676638 crio[4499]: time="2021-08-13 00:25:03.852790364Z" level=info msg="Stopped pod sandbox: c0ed23ec9f5bde6d89a59994134c2e4c16453739fdae950be06d2e67faf1d7a3" id=00d1ba6e-72ef-43aa-a59d-23a11d1fe3df name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID
	965aa52ca106b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     8 seconds ago        Running             storage-provisioner       1                   9963f3337ea1b
	5e817f60aebb0       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb                                     8 seconds ago        Running             kindnet-cni               1                   3d524154e8fb3
	0a2f3c870fcd6       70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61                                     8 seconds ago        Running             coredns                   1                   8f07e8a035ebd
	5a55db8de3e43       7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19                                     8 seconds ago        Running             kube-proxy                1                   db83a08f19dfb
	3b521c3432e80       90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b                                     12 seconds ago       Running             kube-apiserver            0                   267e775e10fdd
	f952f68efc975       b0f1517c1f4bb153597033d2efd81a9ac630e6a569307f993b2c0368afcf0302                                     12 seconds ago       Running             kube-controller-manager   0                   3dee299f25099
	da0f8d971f537       d109c0821a2b9225b69b99a95000df5cd1de5d606bc187b3620d730d7769c6ad                                     12 seconds ago       Running             kube-scheduler            0                   8a7d4a9a15b77
	2a0d3cbfade71       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                     13 seconds ago       Running             etcd                      1                   d04ecd72820f6
	1b3b5683f4a9f       70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61                                     42 seconds ago       Exited              coredns                   0                   8f07e8a035ebd
	2e4137d0d5a8e       docker.io/kindest/kindnetd@sha256:060b2c2951523b42490bae659c4a68989de84e013a7406fcce27b82f1a8c2bc1   58 seconds ago       Exited              kindnet-cni               0                   3d524154e8fb3
	ced11d540ac88       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     About a minute ago   Exited              storage-provisioner       0                   9963f3337ea1b
	e62240b4305b2       7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19                                     About a minute ago   Exited              kube-proxy                0                   db83a08f19dfb
	099efe330404d       0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2                                     About a minute ago   Exited              kube-apiserver            0                   c0ed23ec9f5bd
	93238e6cef498       5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056                                     About a minute ago   Exited              kube-controller-manager   0                   1234ecb4bec23
	a6144a75d0879       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                     About a minute ago   Exited              etcd                      0                   d04ecd72820f6
	2a5a4a41062ea       78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28                                     About a minute ago   Exited              kube-scheduler            0                   06b01061b5afd
	
	* 
	* ==> coredns [0a2f3c870fcd63cda1be32a3e149a862e20f57c7bc3a4f537e5ae86dabb3400a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = ef6277933dc1da9d32a131dbf5945040
	CoreDNS-1.6.5
	linux/amd64, go1.13.4, c2fd1b2
	
	* 
	* ==> coredns [1b3b5683f4a9ff053524367e87259b91d3dc153e616745696f2011092fb1d1df] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = ef6277933dc1da9d32a131dbf5945040
	CoreDNS-1.6.5
	linux/amd64, go1.13.4, c2fd1b2
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-20210813002243-676638
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-20210813002243-676638
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19
	                    minikube.k8s.io/name=test-preload-20210813002243-676638
	                    minikube.k8s.io/updated_at=2021_08_13T00_23_51_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 00:23:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-20210813002243-676638
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 00:25:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 00:25:03 +0000   Fri, 13 Aug 2021 00:23:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 00:25:03 +0000   Fri, 13 Aug 2021 00:23:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 00:25:03 +0000   Fri, 13 Aug 2021 00:23:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 00:25:03 +0000   Fri, 13 Aug 2021 00:24:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    test-preload-20210813002243-676638
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 760e67beb8554645829f2357c8eb4ae7
	  System UUID:                9792f50b-3d67-4135-a941-04b5d5556bad
	  Boot ID:                    f12e4c71-5c79-4cb7-b9de-5d4c99f61cf1
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.17.3
	  Kube-Proxy Version:         v1.17.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6955765f44-fvjzm                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     66s
	  kube-system                 etcd-test-preload-20210813002243-676638                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kindnet-xzvv2                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      66s
	  kube-system                 kube-apiserver-test-preload-20210813002243-676638             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-test-preload-20210813002243-676638    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-c4knf                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-scheduler-test-preload-20210813002243-676638             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From                                            Message
	  ----    ------                   ----               ----                                            -------
	  Normal  Starting                 81s                kubelet, test-preload-20210813002243-676638     Starting kubelet.
	  Normal  NodeHasSufficientMemory  81s                kubelet, test-preload-20210813002243-676638     Node test-preload-20210813002243-676638 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s                kubelet, test-preload-20210813002243-676638     Node test-preload-20210813002243-676638 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     81s                kubelet, test-preload-20210813002243-676638     Node test-preload-20210813002243-676638 status is now: NodeHasSufficientPID
	  Normal  NodeReady                71s                kubelet, test-preload-20210813002243-676638     Node test-preload-20210813002243-676638 status is now: NodeReady
	  Normal  Starting                 65s                kube-proxy, test-preload-20210813002243-676638  Starting kube-proxy.
	  Normal  Starting                 14s                kubelet, test-preload-20210813002243-676638     Starting kubelet.
	  Normal  NodeHasSufficientMemory  14s (x8 over 14s)  kubelet, test-preload-20210813002243-676638     Node test-preload-20210813002243-676638 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet, test-preload-20210813002243-676638     Node test-preload-20210813002243-676638 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14s (x8 over 14s)  kubelet, test-preload-20210813002243-676638     Node test-preload-20210813002243-676638 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s                 kube-proxy, test-preload-20210813002243-676638  Starting kube-proxy.
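
	The percentages in the "Allocated resources" table are simply each millicore request over the node's CPU capacity (8 CPUs = 8000 millicores, per the Capacity section above). A quick sketch of that arithmetic, using integer truncation as `kubectl describe` does:

	```python
	def pct_of_capacity(request_millicores: int, cpus: int) -> int:
	    """Percentage of node CPU capacity a millicore request represents,
	    truncated to an integer (one CPU == 1000 millicores)."""
	    return request_millicores * 100 // (cpus * 1000)

	print(pct_of_capacity(750, 8))  # total CPU requests on this node: 9(%)
	print(pct_of_capacity(250, 8))  # kube-apiserver's 250m request: 3(%)
	```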
	
	* 
	* ==> dmesg <==
	* [  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-8ebb9458ee27
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-8ebb9458ee27
	[  +0.000002] ll header: 00000000: 02 42 0b c8 e3 d4 02 42 c0 a8 31 02 08 00        .B.....B..1...
	[  +0.000001] ll header: 00000000: 02 42 0b c8 e3 d4 02 42 c0 a8 31 02 08 00        .B.....B..1...
	[  +0.000021] ll header: 00000000: 02 42 0b c8 e3 d4 02 42 c0 a8 31 02 08 00        .B.....B..1...
	[  +8.191466] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-8ebb9458ee27
	[  +0.000028] ll header: 00000000: 02 42 0b c8 e3 d4 02 42 c0 a8 31 02 08 00        .B.....B..1...
	[Aug13 00:17] cgroup: cgroup2: unknown option "nsdelegate"
	[ +27.790145] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 00:18] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev vethff2492e2
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 21 20 86 f6 ab 08 06        .......! .....
	[Aug13 00:19] cgroup: cgroup2: unknown option "nsdelegate"
	[ +22.237260] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethee8e84b8
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 1e 68 44 71 a0 a0 08 06        .......hDq....
	[ +18.378629] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 00:20] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev vetha7477435
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff d2 19 e9 24 96 bd 08 06        .........$....
	[  +2.882202] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 00:22] cgroup: cgroup2: unknown option "nsdelegate"
	[Aug13 00:24] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 8e cf 0d c1 9f 1e 08 06        ..............
	[  +0.000004] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 8e cf 0d c1 9f 1e 08 06        ..............
	[ +11.380185] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth1c58a79d
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 42 a5 54 47 c7 9b 08 06        ......B.TG....
	
	* 
	* ==> etcd [2a0d3cbfade71bcbaafdeff01ce8f0ae43d94ea1593a9dc43a758a8275a22ab8] <==
	* 2021-08-13 00:24:58.497780 I | embed: initial advertise peer URLs = https://192.168.49.2:2380
	2021-08-13 00:24:58.497787 I | embed: initial cluster = 
	2021-08-13 00:24:58.502997 I | etcdserver: restarting member aec36adc501070cc in cluster fa54960ea34d58be at commit index 458
	raft2021/08/13 00:24:58 INFO: aec36adc501070cc switched to configuration voters=()
	raft2021/08/13 00:24:58 INFO: aec36adc501070cc became follower at term 2
	raft2021/08/13 00:24:58 INFO: newRaft aec36adc501070cc [peers: [], term: 2, commit: 458, applied: 0, lastindex: 458, lastterm: 2]
	2021-08-13 00:24:58.507575 W | auth: simple token is not cryptographically signed
	2021-08-13 00:24:58.509620 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	raft2021/08/13 00:24:58 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-13 00:24:58.510273 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-13 00:24:58.510429 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 00:24:58.510474 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 00:24:58.512200 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 00:24:58.512333 I | embed: listening for peers on 192.168.49.2:2380
	2021-08-13 00:24:58.512424 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/13 00:24:59 INFO: aec36adc501070cc is starting a new election at term 2
	raft2021/08/13 00:24:59 INFO: aec36adc501070cc became candidate at term 3
	raft2021/08/13 00:24:59 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3
	raft2021/08/13 00:24:59 INFO: aec36adc501070cc became leader at term 3
	raft2021/08/13 00:24:59 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3
	2021-08-13 00:24:59.903952 I | etcdserver: published {Name:test-preload-20210813002243-676638 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-13 00:24:59.903978 I | embed: ready to serve client requests
	2021-08-13 00:24:59.904037 I | embed: ready to serve client requests
	2021-08-13 00:24:59.906185 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-13 00:24:59.906240 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> etcd [a6144a75d08798b823d12d57d635a39524409da42b1abc5e902f8bae12d391e9] <==
	* 2021-08-13 00:23:44.020375 W | auth: simple token is not cryptographically signed
	2021-08-13 00:23:44.090862 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2021-08-13 00:23:44.090979 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/13 00:23:44 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2021-08-13 00:23:44.091522 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-08-13 00:23:44.092915 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 00:23:44.093000 I | embed: listening for peers on 192.168.49.2:2380
	2021-08-13 00:23:44.093086 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/13 00:23:44 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/13 00:23:44 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/13 00:23:44 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/13 00:23:44 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/13 00:23:44 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-13 00:23:44.816546 I | etcdserver: published {Name:test-preload-20210813002243-676638 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-13 00:23:44.816627 I | embed: ready to serve client requests
	2021-08-13 00:23:44.816644 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 00:23:44.816767 I | embed: ready to serve client requests
	2021-08-13 00:23:44.817547 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 00:23:44.817626 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 00:23:44.820143 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 00:23:44.820227 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-13 00:23:51.757034 W | etcdserver: request "header:<ID:8128006928961540688 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/secrets/kube-system/kindnet-token-bfkct\" mod_revision:0 > success:<request_put:<key:\"/registry/secrets/kube-system/kindnet-token-bfkct\" value_size:2353 >> failure:<>>" with result "size:16" took too long (348.591535ms) to execute
	2021-08-13 00:23:51.757508 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" " with result "range_response_count:0 size:5" took too long (154.651055ms) to execute
	2021-08-13 00:24:09.655673 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-6955765f44-fvjzm\" " with result "range_response_count:1 size:1705" took too long (544.577166ms) to execute
	2021-08-13 00:24:09.656665 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (113.737559ms) to execute
	
	* 
	* ==> kernel <==
	*  00:25:11 up  4:07,  0 users,  load average: 1.45, 1.29, 1.53
	Linux test-preload-20210813002243-676638 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [099efe330404d5566ed4ca3f92b52774a0906317bf729a003a34c8636522b6fc] <==
	* I0813 00:23:47.737559       1 cache.go:39] Caches are synced for autoregister controller
	I0813 00:23:47.737559       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0813 00:23:47.737825       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0813 00:23:47.737842       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0813 00:23:47.738424       1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller 
	I0813 00:23:48.636616       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0813 00:23:48.636646       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 00:23:48.636655       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 00:23:48.641016       1 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
	I0813 00:23:48.643725       1 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
	I0813 00:23:48.643741       1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
	I0813 00:23:48.925664       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 00:23:48.956858       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0813 00:23:49.022688       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0813 00:23:49.023403       1 controller.go:606] quota admission added evaluator for: endpoints
	I0813 00:23:49.949925       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0813 00:23:50.627714       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0813 00:23:50.793459       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 00:23:50.904754       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0813 00:24:05.101219       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0813 00:24:05.472540       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0813 00:24:09.656059       1 trace.go:116] Trace[2048216430]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2021-08-13 00:24:09.012323457 +0000 UTC m=+25.016013183) (total time: 643.692838ms):
	Trace[2048216430]: [643.658344ms] [642.200311ms] Transaction committed
	I0813 00:24:09.656474       1 trace.go:116] Trace[1518365262]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-6955765f44-fvjzm,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.1 (started: 2021-08-13 00:24:09.110629631 +0000 UTC m=+25.114319342) (total time: 545.813828ms):
	Trace[1518365262]: [545.64384ms] [545.63364ms] About to write a response
	
	* 
	* ==> kube-apiserver [3b521c3432e8013138d10724c16b0f71892f410f29653d26845a3eae04d1ccf8] <==
	* I0813 00:25:02.354163       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0813 00:25:02.354141       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0813 00:25:02.354169       1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
	I0813 00:25:02.354145       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0813 00:25:02.354069       1 naming_controller.go:288] Starting NamingConditionController
	I0813 00:25:02.354544       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0813 00:25:02.354620       1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
	I0813 00:25:02.354722       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0813 00:25:02.354781       1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E0813 00:25:02.400435       1 controller.go:151] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0813 00:25:02.404038       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 00:25:02.489527       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0813 00:25:02.489593       1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller 
	I0813 00:25:02.489631       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0813 00:25:02.489718       1 cache.go:39] Caches are synced for autoregister controller
	I0813 00:25:02.489551       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0813 00:25:03.353368       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0813 00:25:03.353398       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 00:25:03.353419       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 00:25:03.357845       1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
	I0813 00:25:04.091569       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0813 00:25:04.188009       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0813 00:25:04.208686       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0813 00:25:04.261314       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 00:25:04.266900       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [93238e6cef49886ac2a89e6d443b6d4f401750c66ca00e40674719670485ae18] <==
	* E0813 00:24:55.776121       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/configmaps?allowWatchBookmarks=true&resourceVersion=361&timeout=9m32s&timeoutSeconds=572&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776135       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.VolumeAttachment: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/volumeattachments?allowWatchBookmarks=true&resourceVersion=1&timeout=7m44s&timeoutSeconds=464&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776162       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=201&timeout=9m35s&timeoutSeconds=575&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776173       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ResourceQuota: Get https://control-plane.minikube.internal:8443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=1&timeout=9m32s&timeoutSeconds=572&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776192       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=1&timeout=5m42s&timeoutSeconds=342&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776211       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.DaemonSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/daemonsets?allowWatchBookmarks=true&resourceVersion=392&timeout=9m3s&timeoutSeconds=543&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776333       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ControllerRevision: Get https://control-plane.minikube.internal:8443/apis/apps/v1/controllerrevisions?allowWatchBookmarks=true&resourceVersion=327&timeout=6m19s&timeoutSeconds=379&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776376       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.MutatingWebhookConfiguration: Get https://control-plane.minikube.internal:8443/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=1&timeout=5m31s&timeoutSeconds=331&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776421       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.EndpointSlice: Get https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1beta1/endpointslices?allowWatchBookmarks=true&resourceVersion=1&timeout=9m21s&timeoutSeconds=561&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776423       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=1&timeout=7m45s&timeoutSeconds=465&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776440       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Lease: Get https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=431&timeout=6m20s&timeoutSeconds=380&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776454       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CronJob: Get https://control-plane.minikube.internal:8443/apis/batch/v1beta1/cronjobs?allowWatchBookmarks=true&resourceVersion=1&timeout=5m28s&timeoutSeconds=328&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776466       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Event: Get https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1beta1/events?allowWatchBookmarks=true&resourceVersion=408&timeout=7m51s&timeoutSeconds=471&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776476       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=418&timeout=6m35s&timeoutSeconds=395&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776507       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=416&timeout=6m24s&timeoutSeconds=384&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776543       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.NetworkPolicy: Get https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=1&timeout=6m41s&timeoutSeconds=401&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776552       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://control-plane.minikube.internal:8443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=434&timeout=7m59s&timeoutSeconds=479&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776558       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csidrivers?allowWatchBookmarks=true&resourceVersion=1&timeout=8m55s&timeoutSeconds=535&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776567       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=370&timeout=7m28s&timeoutSeconds=448&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776881       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://control-plane.minikube.internal:8443/apis/extensions/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=1&timeout=8m21s&timeoutSeconds=501&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776937       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ValidatingWebhookConfiguration: Get https://control-plane.minikube.internal:8443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=1&timeout=6m21s&timeoutSeconds=381&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776942       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.RoleBinding: Get https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=369&timeout=5m43s&timeoutSeconds=343&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776981       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PriorityClass: Get https://control-plane.minikube.internal:8443/apis/scheduling.k8s.io/v1/priorityclasses?allowWatchBookmarks=true&resourceVersion=44&timeout=7m42s&timeoutSeconds=462&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.777258       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.LimitRange: Get https://control-plane.minikube.internal:8443/api/v1/limitranges?allowWatchBookmarks=true&resourceVersion=1&timeout=6m27s&timeoutSeconds=387&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	
	* 
	* ==> kube-controller-manager [f952f68efc97506d84f9949edc6f0d9293095817fe9cfd1f53f4ee1bbcc55a2b] <==
	* I0813 00:24:59.592264       1 tlsconfig.go:219] Starting DynamicServingCertificateController
	I0813 00:24:59.592539       1 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
	I0813 00:25:04.647249       1 plugins.go:100] No cloud provider specified.
	I0813 00:25:04.648252       1 shared_informer.go:197] Waiting for caches to sync for tokens
	I0813 00:25:04.654663       1 controllermanager.go:533] Started "ttl"
	I0813 00:25:04.654802       1 ttl_controller.go:116] Starting TTL controller
	I0813 00:25:04.654818       1 shared_informer.go:197] Waiting for caches to sync for TTL
	I0813 00:25:04.660545       1 controllermanager.go:533] Started "tokencleaner"
	I0813 00:25:04.660745       1 tokencleaner.go:117] Starting token cleaner controller
	I0813 00:25:04.660764       1 shared_informer.go:197] Waiting for caches to sync for token_cleaner
	I0813 00:25:04.660774       1 shared_informer.go:204] Caches are synced for token_cleaner 
	I0813 00:25:04.666618       1 controllermanager.go:533] Started "csrsigning"
	I0813 00:25:04.666774       1 certificate_controller.go:118] Starting certificate controller "csrsigning"
	I0813 00:25:04.666791       1 shared_informer.go:197] Waiting for caches to sync for certificate-csrsigning
	I0813 00:25:04.679455       1 controllermanager.go:533] Started "namespace"
	I0813 00:25:04.679515       1 namespace_controller.go:200] Starting namespace controller
	I0813 00:25:04.679526       1 shared_informer.go:197] Waiting for caches to sync for namespace
	I0813 00:25:04.684671       1 controllermanager.go:533] Started "job"
	I0813 00:25:04.684691       1 job_controller.go:143] Starting job controller
	I0813 00:25:04.684702       1 shared_informer.go:197] Waiting for caches to sync for job
	I0813 00:25:04.690343       1 controllermanager.go:533] Started "deployment"
	I0813 00:25:04.690506       1 deployment_controller.go:152] Starting deployment controller
	I0813 00:25:04.690520       1 shared_informer.go:197] Waiting for caches to sync for deployment
	I0813 00:25:04.695525       1 node_ipam_controller.go:94] Sending events to api server.
	I0813 00:25:04.748462       1 shared_informer.go:204] Caches are synced for tokens 
	
	* 
	* ==> kube-proxy [5a55db8de3e436078d2dc95ee582f622237ac75764bb91d9c7ff18efa4ffca73] <==
	* W0813 00:25:03.001124       1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
	I0813 00:25:03.008835       1 node.go:135] Successfully retrieved node IP: 192.168.49.2
	I0813 00:25:03.008878       1 server_others.go:145] Using iptables Proxier.
	I0813 00:25:03.009194       1 server.go:571] Version: v1.17.0
	I0813 00:25:03.009971       1 config.go:131] Starting endpoints config controller
	I0813 00:25:03.010015       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0813 00:25:03.010050       1 config.go:313] Starting service config controller
	I0813 00:25:03.010061       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0813 00:25:03.110195       1 shared_informer.go:204] Caches are synced for service config 
	I0813 00:25:03.110195       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-proxy [e62240b4305b268ea5c6bf1f0fc86a635a19354c96d3cf79311eca79dfda55ca] <==
	* W0813 00:24:06.192773       1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
	I0813 00:24:06.201288       1 node.go:135] Successfully retrieved node IP: 192.168.49.2
	I0813 00:24:06.201334       1 server_others.go:145] Using iptables Proxier.
	I0813 00:24:06.201708       1 server.go:571] Version: v1.17.0
	I0813 00:24:06.203064       1 config.go:131] Starting endpoints config controller
	I0813 00:24:06.203099       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0813 00:24:06.203133       1 config.go:313] Starting service config controller
	I0813 00:24:06.203138       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0813 00:24:06.303590       1 shared_informer.go:204] Caches are synced for service config 
	I0813 00:24:06.304878       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [2a5a4a41062eaa52c37a212a8b205aef44a432dae42172939cdfe77658bf13de] <==
	* E0813 00:23:48.710383       1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 00:23:48.711091       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 00:23:48.712286       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 00:23:48.713263       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 00:23:48.714331       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 00:23:48.715403       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 00:23:48.716436       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 00:23:48.717685       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 00:23:48.718771       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 00:23:48.719888       1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 00:23:48.720985       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 00:23:48.722133       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0813 00:23:49.804454       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0813 00:24:55.775262       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=1&timeout=9m5s&timeoutSeconds=545&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.775352       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=201&timeout=9m56s&timeoutSeconds=596&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.775285       1 reflector.go:320] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&resourceVersion=416&timeoutSeconds=374&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.775480       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=418&timeout=5m18s&timeoutSeconds=318&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	W0813 00:24:55.775577       1 reflector.go:340] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: very short watch: k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Unexpected watch close - watch lasted less than a second and no items received
	E0813 00:24:55.775705       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=42&timeout=7m32s&timeoutSeconds=452&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.775718       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=432&timeout=5m24s&timeoutSeconds=324&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.775911       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=8m33s&timeoutSeconds=513&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.775933       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=1&timeout=8m17s&timeoutSeconds=497&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776033       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=6m18s&timeoutSeconds=378&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	E0813 00:24:55.776677       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=370&timeout=5m19s&timeoutSeconds=319&watch=true: dial tcp 192.168.49.2:8443: connect: connection refused
	W0813 00:24:55.776750       1 reflector.go:340] k8s.io/client-go/informers/factory.go:135: watch of *v1beta1.PodDisruptionBudget ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received
	
	* 
	* ==> kube-scheduler [da0f8d971f53744b807a7590a4afe55fbc8fca261b682596832020e1e69beb30] <==
	* I0813 00:24:59.239465       1 serving.go:312] Generated self-signed cert in-memory
	W0813 00:24:59.593580       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
	W0813 00:24:59.593727       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
	W0813 00:25:02.370143       1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0813 00:25:02.370266       1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0813 00:25:02.370308       1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 00:25:02.370339       1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	W0813 00:25:02.405073       1 authorization.go:47] Authorization is disabled
	W0813 00:25:02.405095       1 authentication.go:92] Authentication is disabled
	I0813 00:25:02.405109       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0813 00:25:02.406793       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 00:25:02.406841       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 00:25:02.407723       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0813 00:25:02.407847       1 tlsconfig.go:219] Starting DynamicServingCertificateController
	I0813 00:25:02.507108       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 00:22:46 UTC, end at Fri 2021-08-13 00:25:12 UTC. --
	Aug 13 00:25:02 test-preload-20210813002243-676638 kubelet[6509]: I0813 00:25:02.404722    6509 kubelet.go:1645] Trying to delete pod kube-controller-manager-test-preload-20210813002243-676638_kube-system 9900ebb0-0054-4b1c-86a4-944e0173a789
	Aug 13 00:25:02 test-preload-20210813002243-676638 kubelet[6509]: I0813 00:25:02.491347    6509 kuberuntime_manager.go:981] updating runtime config through cri with podcidr 10.244.0.0/24
	Aug 13 00:25:02 test-preload-20210813002243-676638 kubelet[6509]: I0813 00:25:02.492320    6509 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
	Aug 13 00:25:02 test-preload-20210813002243-676638 kubelet[6509]: W0813 00:25:02.498807    6509 status_manager.go:546] Failed to update status for pod "kube-scheduler-test-preload-20210813002243-676638_kube-system(a02e5f47-b4dd-4907-8e2c-89d3b678faf9)": failed to patch status "{\"metadata\":{\"uid\":\"a02e5f47-b4dd-4907-8e2c-89d3b678faf9\"},\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"Initialized\"},{\"type\":\"Ready\"},{\"type\":\"ContainersReady\"},{\"type\":\"PodScheduled\"}],\"conditions\":[{\"lastTransitionTime\":\"2021-08-13T00:24:57Z\",\"type\":\"Initialized\"},{\"lastTransitionTime\":\"2021-08-13T00:24:57Z\",\"message\":\"containers with unready status: [kube-scheduler]\",\"reason\":\"ContainersNotReady\",\"status\":\"False\",\"type\":\"Ready\"},{\"lastTransitionTime\":\"2021-08-13T00:24:57Z\",\"message\":\"containers with unready status: [kube-scheduler]\",\"reason\":\"ContainersNotReady\",\"status\":\"False\",\"type\":\"ContainersReady\"},{\"lastTransitionTime\":\"2021-08-13T00:
24:57Z\",\"type\":\"PodScheduled\"}],\"containerStatuses\":[{\"image\":\"k8s.gcr.io/kube-scheduler:v1.17.3\",\"imageID\":\"\",\"lastState\":{},\"name\":\"kube-scheduler\",\"ready\":false,\"restartCount\":0,\"started\":false,\"state\":{\"waiting\":{\"reason\":\"ContainerCreating\"}}}],\"phase\":\"Pending\",\"podIPs\":null,\"startTime\":\"2021-08-13T00:24:57Z\"}}" for pod "kube-system"/"kube-scheduler-test-preload-20210813002243-676638": pods "kube-scheduler-test-preload-20210813002243-676638" not found
	Aug 13 00:25:02 test-preload-20210813002243-676638 kubelet[6509]: W0813 00:25:02.499379    6509 kubelet.go:1649] Deleted mirror pod "kube-scheduler-test-preload-20210813002243-676638_kube-system(a02e5f47-b4dd-4907-8e2c-89d3b678faf9)" because it is outdated
	Aug 13 00:25:02 test-preload-20210813002243-676638 kubelet[6509]: W0813 00:25:02.499775    6509 kubelet.go:1649] Deleted mirror pod "kube-apiserver-test-preload-20210813002243-676638_kube-system(a6bee743-08f1-4a6b-a458-0af4ee5c9119)" because it is outdated
	Aug 13 00:25:02 test-preload-20210813002243-676638 kubelet[6509]: W0813 00:25:02.499936    6509 kubelet.go:1649] Deleted mirror pod "kube-controller-manager-test-preload-20210813002243-676638_kube-system(9900ebb0-0054-4b1c-86a4-944e0173a789)" because it is outdated
	Aug 13 00:25:02 test-preload-20210813002243-676638 kubelet[6509]: I0813 00:25:02.500970    6509 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/623a45e5-f85f-4e51-a632-2053eaa19cd6-kube-proxy") pod "kube-proxy-c4knf" (UID: "623a45e5-f85f-4e51-a632-2053eaa19cd6")
	Aug 13 00:25:02 test-preload-20210813002243-676638 kubelet[6509]: I0813 00:25:02.501019    6509 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/623a45e5-f85f-4e51-a632-2053eaa19cd6-xtables-lock") pod "kube-proxy-c4knf" (UID: "623a45e5-f85f-4e51-a632-2053eaa19cd6")
	Aug 13 00:25:02 test-preload-20210813002243-676638 kubelet[6509]: I0813 00:25:02.501091    6509 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-lzch4" (UniqueName: "kubernetes.io/secret/623a45e5-f85f-4e51-a632-2053eaa19cd6-kube-proxy-token-lzch4") pod "kube-proxy-c4knf" (UID: "623a45e5-f85f-4e51-a632-2053eaa19cd6")
	Aug 13 00:25:02 test-preload-20210813002243-676638 kubelet[6509]: I0813 00:25:02.501149    6509 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/d33fcc36-e797-4acb-862f-81982ea3bffa-cni-cfg") pod "kindnet-xzvv2" (UID: "d33fcc36-e797-4acb-862f-81982ea3bffa")
	Aug 13 00:25:02 test-preload-20210813002243-676638 kubelet[6509]: I0813 00:25:02.501185    6509 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/f2d85412-2403-46f6-a704-4513ff9bcfa6-tmp") pod "storage-provisioner" (UID: "f2d85412-2403-46f6-a704-4513ff9bcfa6")
	Aug 13 00:25:02 test-preload-20210813002243-676638 kubelet[6509]: I0813 00:25:02.501275    6509 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-bfkct" (UniqueName: "kubernetes.io/secret/d33fcc36-e797-4acb-862f-81982ea3bffa-kindnet-token-bfkct") pod "kindnet-xzvv2" (UID: "d33fcc36-e797-4acb-862f-81982ea3bffa")
	Aug 13 00:25:02 test-preload-20210813002243-676638 kubelet[6509]: I0813 00:25:02.501322    6509 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-d7wbl" (UniqueName: "kubernetes.io/secret/f2d85412-2403-46f6-a704-4513ff9bcfa6-storage-provisioner-token-d7wbl") pod "storage-provisioner" (UID: "f2d85412-2403-46f6-a704-4513ff9bcfa6")
	Aug 13 00:25:02 test-preload-20210813002243-676638 kubelet[6509]: I0813 00:25:02.501370    6509 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/623a45e5-f85f-4e51-a632-2053eaa19cd6-lib-modules") pod "kube-proxy-c4knf" (UID: "623a45e5-f85f-4e51-a632-2053eaa19cd6")
	Aug 13 00:25:02 test-preload-20210813002243-676638 kubelet[6509]: I0813 00:25:02.501396    6509 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/40a844df-90e2-4539-a9c0-ff1b20374ebf-config-volume") pod "coredns-6955765f44-fvjzm" (UID: "40a844df-90e2-4539-a9c0-ff1b20374ebf")
	Aug 13 00:25:02 test-preload-20210813002243-676638 kubelet[6509]: I0813 00:25:02.501421    6509 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-4sxrr" (UniqueName: "kubernetes.io/secret/40a844df-90e2-4539-a9c0-ff1b20374ebf-coredns-token-4sxrr") pod "coredns-6955765f44-fvjzm" (UID: "40a844df-90e2-4539-a9c0-ff1b20374ebf")
	Aug 13 00:25:02 test-preload-20210813002243-676638 kubelet[6509]: I0813 00:25:02.501446    6509 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/d33fcc36-e797-4acb-862f-81982ea3bffa-xtables-lock") pod "kindnet-xzvv2" (UID: "d33fcc36-e797-4acb-862f-81982ea3bffa")
	Aug 13 00:25:02 test-preload-20210813002243-676638 kubelet[6509]: I0813 00:25:02.501475    6509 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/d33fcc36-e797-4acb-862f-81982ea3bffa-lib-modules") pod "kindnet-xzvv2" (UID: "d33fcc36-e797-4acb-862f-81982ea3bffa")
	Aug 13 00:25:02 test-preload-20210813002243-676638 kubelet[6509]: I0813 00:25:02.501559    6509 reconciler.go:156] Reconciler: start to sync state
	Aug 13 00:25:03 test-preload-20210813002243-676638 kubelet[6509]: I0813 00:25:03.716418    6509 kubelet_node_status.go:112] Node test-preload-20210813002243-676638 was previously registered
	Aug 13 00:25:03 test-preload-20210813002243-676638 kubelet[6509]: I0813 00:25:03.716544    6509 kubelet_node_status.go:73] Successfully registered node test-preload-20210813002243-676638
	Aug 13 00:25:04 test-preload-20210813002243-676638 kubelet[6509]: W0813 00:25:04.638973    6509 pod_container_deletor.go:75] Container "06b01061b5afd50977f0f1cf3a696ed318947882c403d0ac8149be21d1b125fc" not found in pod's containers
	Aug 13 00:25:04 test-preload-20210813002243-676638 kubelet[6509]: W0813 00:25:04.640059    6509 pod_container_deletor.go:75] Container "c0ed23ec9f5bde6d89a59994134c2e4c16453739fdae950be06d2e67faf1d7a3" not found in pod's containers
	Aug 13 00:25:04 test-preload-20210813002243-676638 kubelet[6509]: W0813 00:25:04.641028    6509 pod_container_deletor.go:75] Container "1234ecb4bec235cdb94a257d1edbbf18f99f7bb9fc87660bb73ffbf23c3d4c53" not found in pod's containers
	
	* 
	* ==> storage-provisioner [965aa52ca106b6c402fbf366c0fa85423f7b16064c2f4801857189ae575bd2f2] <==
	* I0813 00:25:03.219404       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 00:25:03.226895       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 00:25:03.226946       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	* 
	* ==> storage-provisioner [ced11d540ac880d0f874487cbfd54fa7498fcc42b4cc0fb1c1e8e9c2bf31abea] <==
	* I0813 00:24:10.033950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 00:24:10.041852       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 00:24:10.041906       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 00:24:10.047555       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 00:24:10.047741       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_test-preload-20210813002243-676638_1c298b63-aefa-4df4-8c44-0eabcdf3c03e!
	I0813 00:24:10.048802       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"59116571-73ef-40d2-b36e-ad3925b810a7", APIVersion:"v1", ResourceVersion:"383", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' test-preload-20210813002243-676638_1c298b63-aefa-4df4-8c44-0eabcdf3c03e became leader
	I0813 00:24:10.147884       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_test-preload-20210813002243-676638_1c298b63-aefa-4df4-8c44-0eabcdf3c03e!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-20210813002243-676638 -n test-preload-20210813002243-676638
helpers_test.go:262: (dbg) Run:  kubectl --context test-preload-20210813002243-676638 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestPreload]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context test-preload-20210813002243-676638 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context test-preload-20210813002243-676638 describe pod : exit status 1 (52.294562ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context test-preload-20210813002243-676638 describe pod : exit status 1
helpers_test.go:176: Cleaning up "test-preload-20210813002243-676638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20210813002243-676638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20210813002243-676638: (3.179452464s)
--- FAIL: TestPreload (152.69s)

                                                
                                    
TestScheduledStopUnix (70.93s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20210813002516-676638 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20210813002516-676638 --memory=2048 --driver=docker  --container-runtime=crio: (28.431467901s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813002516-676638 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210813002516-676638 -n scheduled-stop-20210813002516-676638
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813002516-676638 --schedule 8s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813002516-676638 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210813002516-676638 -n scheduled-stop-20210813002516-676638
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210813002516-676638
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210813002516-676638 --schedule 5s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210813002516-676638
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20210813002516-676638: exit status 3 (3.323554452s)

                                                
                                                
-- stdout --
	scheduled-stop-20210813002516-676638
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 00:26:16.478194  816869 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:52040->127.0.0.1:33348: read: connection reset by peer
	E0813 00:26:16.478248  816869 status.go:258] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:52040->127.0.0.1:33348: read: connection reset by peer

                                                
                                                
** /stderr **
scheduled_stop_test.go:209: minikube status: exit status 3

                                                
                                                
-- stdout --
	scheduled-stop-20210813002516-676638
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 00:26:16.478194  816869 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:52040->127.0.0.1:33348: read: connection reset by peer
	E0813 00:26:16.478248  816869 status.go:258] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:52040->127.0.0.1:33348: read: connection reset by peer

                                                
                                                
** /stderr **
panic.go:613: *** TestScheduledStopUnix FAILED at 2021-08-13 00:26:16.480565388 +0000 UTC m=+1895.092561754
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect scheduled-stop-20210813002516-676638
helpers_test.go:236: (dbg) docker inspect scheduled-stop-20210813002516-676638:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d9711dee00d4046d65ff384336c3886eedcc9d3cb70219f4fdfba95792ab16e5",
	        "Created": "2021-08-13T00:25:17.825887929Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 813134,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T00:25:18.295065209Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
	        "ResolvConfPath": "/var/lib/docker/containers/d9711dee00d4046d65ff384336c3886eedcc9d3cb70219f4fdfba95792ab16e5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d9711dee00d4046d65ff384336c3886eedcc9d3cb70219f4fdfba95792ab16e5/hostname",
	        "HostsPath": "/var/lib/docker/containers/d9711dee00d4046d65ff384336c3886eedcc9d3cb70219f4fdfba95792ab16e5/hosts",
	        "LogPath": "/var/lib/docker/containers/d9711dee00d4046d65ff384336c3886eedcc9d3cb70219f4fdfba95792ab16e5/d9711dee00d4046d65ff384336c3886eedcc9d3cb70219f4fdfba95792ab16e5-json.log",
	        "Name": "/scheduled-stop-20210813002516-676638",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-20210813002516-676638:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-20210813002516-676638",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/732c1976467f84f45fc4bde4bbb245430935627779bb17f13c40b1c55bd5b5b2-init/diff:/var/lib/docker/overlay2/dbcccdfd1d8030c8fd84392abd0651a1c83d85eef1664675f19095ba94d0669c/diff:/var/lib/docker/overlay2/48560ccfa5a167568c6c277306b75040147fa803b45938da98f999b9b34770ec/diff:/var/lib/docker/overlay2/82edbb53b45859b009a31b65fc937517517994e9f7f2b61ab6a2cd9b5d793ea6/diff:/var/lib/docker/overlay2/67407816da0f4fc9226789b4471160bc847b978aa567cac46bd77c492c2e0bd8/diff:/var/lib/docker/overlay2/56fdeb530def71ef2955d22a7a9769b93f1dfc06d3e44e40ff20fce371d47e93/diff:/var/lib/docker/overlay2/eb0df517e10831d2d369ffbcc40b44f2ae8a39b1845697429224cb9ee96aef88/diff:/var/lib/docker/overlay2/0f2b796a50d0eef34622b78f6ac5a1b4914163a3b69965848a40245456d0a358/diff:/var/lib/docker/overlay2/fbdca95cd15a30d761b8949a28398e3694f3cd5af4e11f01066b8aa89ab0e223/diff:/var/lib/docker/overlay2/94899cbf3c806327e740cdc8b542a92bcf6e487ba93ab006749e9b13198b697a/diff:/var/lib/docker/overlay2/26a7c8
74215c711e77443c1fe264e626d5672e0127f6210b657baea90dc79adb/diff:/var/lib/docker/overlay2/16bd4fd277923e4600e9bd3819ae284053a099ab01e82d41f29752792664be0e/diff:/var/lib/docker/overlay2/7309f9c878e5d24824d68bef540877dc63f2d4c0745de5d0bf7f09e2a65c4600/diff:/var/lib/docker/overlay2/69de2b4390e19f2dda71ecf7c7fef7a9c01fabcf86a7c439a2919ae1284c8de6/diff:/var/lib/docker/overlay2/0ff6ec4f8c21672b1a77bd0033b8786942a677b797ffa1c0fbbb8e03a13d84ed/diff:/var/lib/docker/overlay2/d672d17598d05d9daa3eddac9f958d6913ebfccf431eb143f1f3903b89d150a9/diff:/var/lib/docker/overlay2/0f5d711484163b1b3f60dd6126d6daa0154c241a003764ef80e81d73d68b3ed6/diff:/var/lib/docker/overlay2/d3e7cb92a45651117204153d8d9bc8490164e7c8f439d0c6d157aebf680816ae/diff:/var/lib/docker/overlay2/4b81367fe927507da6730098aedd39a4bd58482dacc101a1dd66f191161dce2d/diff:/var/lib/docker/overlay2/5e9324cbc949319d8647c63cf76f1776a9474d1b961f604c7d87daeb7ebb111d/diff:/var/lib/docker/overlay2/010e1940f131233ee479e023b64f3d26d5b8444f44686cc3f0f1508d966a3c37/diff:/var/lib/d
ocker/overlay2/842ba2e088d8e8cdfa07950eb5be4178d7c22d5932419eb6881e2551df6383d1/diff:/var/lib/docker/overlay2/5a3a00a19445c1d8b4de2bac2fee0c796647356d04b424b1a92c252905d279b0/diff:/var/lib/docker/overlay2/fe2f56e2617a01ef886be702089b24e7058e63d8e824252c67d4c1a0879ad160/diff:/var/lib/docker/overlay2/38b35bcc55b3c7019af7c73f5eed6e0fc162e93a9f9dc7005f87a42358887784/diff:/var/lib/docker/overlay2/d9c894d408f003f4a81d231415f823e9936440a1ee3195405f2fa88b29cd4174/diff:/var/lib/docker/overlay2/1f809a5b11bbef9de3b7711ec341e3852caa4fd2c21e59015b082ae96980b66a/diff:/var/lib/docker/overlay2/99b8edcd10c58a9d6dc18c04bc3d78ee5e078fd13677063e50d0f8b7cd484f8e/diff:/var/lib/docker/overlay2/b7e659e3e24c55bbbb4560a609f06959cff67515ccfed5251eb149eb25e46066/diff:/var/lib/docker/overlay2/cd8af3183f19e2c4a376399788541c30ba2531a85eeecf9fe11864d329a507d9/diff:/var/lib/docker/overlay2/84813126d4751fc1c3f21d3f70102678caac8153899dc8a5e0af833842e299a8/diff:/var/lib/docker/overlay2/2a328079a8a98d312436a8d89f7b47dde7400fe0357b71b76ed6bc760f8
0f741/diff:/var/lib/docker/overlay2/68fb29110f487206a1dee378747a2f3ef1c89149c9968662726587ea918839d7/diff:/var/lib/docker/overlay2/f9baf28d86b9d2aa6fbb47eab690cb3a8a89d77efe26a5f0c73e8f86bce7214f/diff:/var/lib/docker/overlay2/dad436e2a201d537bbbd0e375ec55a078b79dad52ee0a39399e1d1044bef8136/diff:/var/lib/docker/overlay2/4c5f3abd2b3256b532e1522df89aaca80508afb850fe2081fd29e234ecc52a3c/diff:/var/lib/docker/overlay2/abd7c1d6e94e128091e4cd7c4e2b418a6e7f40430fa8e22724424ee318edfaa2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/732c1976467f84f45fc4bde4bbb245430935627779bb17f13c40b1c55bd5b5b2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/732c1976467f84f45fc4bde4bbb245430935627779bb17f13c40b1c55bd5b5b2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/732c1976467f84f45fc4bde4bbb245430935627779bb17f13c40b1c55bd5b5b2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-20210813002516-676638",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-20210813002516-676638/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-20210813002516-676638",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-20210813002516-676638",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-20210813002516-676638",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7951b15c20bcd7200c7b82d5aa6e9a0cfa84c2b0abbf5698e81d6dc44cdbfb92",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33348"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33347"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33344"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33346"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33345"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7951b15c20bc",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-20210813002516-676638": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d9711dee00d4"
	                    ],
	                    "NetworkID": "d1db44c1cdbec761e27558541fe64ec267981bf0229d703d1b9b046d56934231",
	                    "EndpointID": "d4785aa7ad396d8c87d10ef5d6e805e34b635ca1b26c31d90fccc333ddbfe942",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
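For reference, the host port that the status check below dials (`127.0.0.1:33348`) comes from the `Ports` map in the docker inspect output above. A minimal sketch of extracting it, using a trimmed excerpt of that JSON (structure as shown in the log; not minikube's own code):

```python
import json

# Trimmed excerpt of the "Ports" section from the inspect output above.
inspect_excerpt = """
{
  "Ports": {
    "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "33348"}],
    "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33345"}]
  }
}
"""

def host_port(inspect_json: str, container_port: str) -> str:
    """Return the first host port bound to the given container port."""
    ports = json.loads(inspect_json)["Ports"]
    return ports[container_port][0]["HostPort"]

print(host_port(inspect_excerpt, "22/tcp"))   # → 33348
print(host_port(inspect_excerpt, "8443/tcp")) # → 33345
```

The same lookup is what `docker inspect -f "'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'"` performs (as seen later in the RunningBinaryUpgrade log).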
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210813002516-676638 -n scheduled-stop-20210813002516-676638
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210813002516-676638 -n scheduled-stop-20210813002516-676638: exit status 3 (3.313616032s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0813 00:26:19.834051  816969 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:52078->127.0.0.1:33348: read: connection reset by peer
	E0813 00:26:19.834074  816969 status.go:247] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:52078->127.0.0.1:33348: read: connection reset by peer

** /stderr **
helpers_test.go:240: status error: exit status 3 (may be ok)
helpers_test.go:242: "scheduled-stop-20210813002516-676638" host is not running, skipping log retrieval (state="Error")
helpers_test.go:176: Cleaning up "scheduled-stop-20210813002516-676638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20210813002516-676638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20210813002516-676638: (7.269540753s)
--- FAIL: TestScheduledStopUnix (70.93s)
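The `exit status 3` above traces to an SSH handshake failing with `connection reset by peer` on the forwarded port 33348. As a quick triage step, independent of minikube's tooling, one can first confirm whether the mapped port accepts TCP connections at all; a minimal sketch (the helper name is illustrative, not from the test suite):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. probe the forwarded SSH port from the inspect output above:
# tcp_reachable("127.0.0.1", 33348)
```

A successful connect that is then reset mid-handshake (as in this log) points at the container's sshd dying rather than at the port mapping itself.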

TestRunningBinaryUpgrade (151.8s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.9.0.747871445.exe start -p running-upgrade-20210813002835-676638 --memory=2200 --vm-driver=docker  --container-runtime=crio

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.9.0.747871445.exe start -p running-upgrade-20210813002835-676638 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m28.299296243s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20210813002835-676638 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:138: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-20210813002835-676638 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (58.896272585s)

-- stdout --
	* [running-upgrade-20210813002835-676638] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_LOCATION=12230
	* Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node running-upgrade-20210813002835-676638 in cluster running-upgrade-20210813002835-676638
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20210813002835-676638" container ...
	
	

-- /stdout --
** stderr ** 
	I0813 00:30:04.484097  869787 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:30:04.484210  869787 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:30:04.484215  869787 out.go:311] Setting ErrFile to fd 2...
	I0813 00:30:04.484220  869787 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:30:04.484393  869787 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:30:04.484770  869787 out.go:305] Setting JSON to false
	I0813 00:30:04.530574  869787 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":15166,"bootTime":1628799438,"procs":314,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 00:30:04.530739  869787 start.go:121] virtualization: kvm guest
	I0813 00:30:04.533780  869787 out.go:177] * [running-upgrade-20210813002835-676638] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 00:30:04.533945  869787 notify.go:169] Checking for updates...
	I0813 00:30:04.536313  869787 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:30:04.538009  869787 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 00:30:04.539658  869787 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:30:04.542279  869787 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 00:30:04.542896  869787 start_flags.go:521] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0813 00:30:04.545333  869787 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0813 00:30:04.545397  869787 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 00:30:04.607796  869787 docker.go:132] docker version: linux-19.03.15
	I0813 00:30:04.607918  869787 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 00:30:04.719612  869787 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:73 SystemTime:2021-08-13 00:30:04.664465898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 00:30:04.719701  869787 docker.go:244] overlay module found
	I0813 00:30:04.722213  869787 out.go:177] * Using the docker driver based on existing profile
	I0813 00:30:04.722243  869787 start.go:278] selected driver: docker
	I0813 00:30:04.722251  869787 start.go:751] validating driver "docker" against &{Name:running-upgrade-20210813002835-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-20210813002835-676638 Namespace: APIServerName:minikubeCA APIServ
erNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:30:04.722335  869787 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 00:30:04.722373  869787 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 00:30:04.722395  869787 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0813 00:30:04.723707  869787 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 00:30:04.724606  869787 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 00:30:04.821356  869787 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:72 SystemTime:2021-08-13 00:30:04.764839545 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0813 00:30:04.821505  869787 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 00:30:04.821534  869787 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0813 00:30:04.823990  869787 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 00:30:04.824104  869787 cni.go:93] Creating CNI manager for ""
	I0813 00:30:04.824122  869787 cni.go:142] EnableDefaultCNI is true, recommending bridge
	I0813 00:30:04.824133  869787 start_flags.go:277] config:
	{Name:running-upgrade-20210813002835-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-20210813002835-676638 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISock
et: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:30:04.826201  869787 out.go:177] * Starting control plane node running-upgrade-20210813002835-676638 in cluster running-upgrade-20210813002835-676638
	I0813 00:30:04.826248  869787 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 00:30:04.827671  869787 out.go:177] * Pulling base image ...
	I0813 00:30:04.827701  869787 preload.go:131] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0813 00:30:04.827812  869787 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	W0813 00:30:04.882888  869787 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0813 00:30:04.883697  869787 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/running-upgrade-20210813002835-676638/config.json ...
	I0813 00:30:04.884168  869787 cache.go:108] acquiring lock: {Name:mkebf5e5183b3fe1832480b10a0767c0216ef0fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:30:04.884414  869787 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0813 00:30:04.884503  869787 cache.go:108] acquiring lock: {Name:mk7e91618de7d10682336f6b0a703a9b3cc6d7c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:30:04.884577  869787 cache.go:108] acquiring lock: {Name:mk083d4b02aa9d0f9be1d05f8e6c4194cbc5f200 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:30:04.884746  869787 cache.go:108] acquiring lock: {Name:mkd01ce055fc376d1e00625138b7b37ece1a1361 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:30:04.884807  869787 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 00:30:04.884829  869787 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 300.649µs
	I0813 00:30:04.884853  869787 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 00:30:04.884630  869787 cache.go:108] acquiring lock: {Name:mk2f8f4570a6c789373dfd171ca45edd03578901 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:30:04.884873  869787 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0813 00:30:04.884897  869787 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 150.999µs
	I0813 00:30:04.884916  869787 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0813 00:30:04.884902  869787 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 exists
	I0813 00:30:04.884921  869787 cache.go:108] acquiring lock: {Name:mk93fe9de025e2f7d4c900d8f720f3f08473ba0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:30:04.884949  869787 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0" took 517.347µs
	I0813 00:30:04.884973  869787 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 exists
	I0813 00:30:04.884976  869787 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 succeeded
	I0813 00:30:04.884711  869787 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
	I0813 00:30:04.885013  869787 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0" took 94.544µs
	I0813 00:30:04.885029  869787 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 succeeded
	I0813 00:30:04.885015  869787 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 519.378µs
	I0813 00:30:04.884995  869787 cache.go:108] acquiring lock: {Name:mkaad6b66e2dc046b4e0ed6a834b4d5311810374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:30:04.885042  869787 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0813 00:30:04.884444  869787 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 279.186µs
	I0813 00:30:04.885061  869787 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0813 00:30:04.884673  869787 cache.go:108] acquiring lock: {Name:mk86691a35a6e63905cd7f5eb05f6d130611ae9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:30:04.885083  869787 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 exists
	I0813 00:30:04.885099  869787 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0" took 108.202µs
	I0813 00:30:04.885112  869787 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 succeeded
	I0813 00:30:04.884722  869787 cache.go:108] acquiring lock: {Name:mk9074d385c3911880d5dcb8de8278c6b7e76cfa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:30:04.885121  869787 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 exists
	I0813 00:30:04.885148  869787 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 exists
	I0813 00:30:04.885136  869787 cache.go:97] cache image "k8s.gcr.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7" took 475.054µs
	I0813 00:30:04.885188  869787 cache.go:81] save to tar file k8s.gcr.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 succeeded
	I0813 00:30:04.885173  869787 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0" took 452.047µs
	I0813 00:30:04.885207  869787 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 succeeded
	I0813 00:30:04.885421  869787 cache.go:108] acquiring lock: {Name:mk36aa2c825ea6a2262161b88564609f4d60e208 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:30:04.885761  869787 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
	I0813 00:30:04.885781  869787 cache.go:97] cache image "k8s.gcr.io/pause:3.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 1.367258ms
	I0813 00:30:04.885803  869787 cache.go:81] save to tar file k8s.gcr.io/pause:3.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
	I0813 00:30:04.885820  869787 cache.go:88] Successfully saved all images to host disk.
	I0813 00:30:04.948033  869787 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0813 00:30:04.948067  869787 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0813 00:30:04.948089  869787 cache.go:205] Successfully downloaded all kic artifacts
	I0813 00:30:04.948153  869787 start.go:313] acquiring machines lock for running-upgrade-20210813002835-676638: {Name:mkb7fd2b8a7a16158fa21a72800d55e71929a50c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:30:04.948309  869787 start.go:317] acquired machines lock for "running-upgrade-20210813002835-676638" in 125.955µs
	I0813 00:30:04.948337  869787 start.go:93] Skipping create...Using existing machine configuration
	I0813 00:30:04.948347  869787 fix.go:55] fixHost starting: m01
	I0813 00:30:04.948620  869787 cli_runner.go:115] Run: docker container inspect running-upgrade-20210813002835-676638 --format={{.State.Status}}
	I0813 00:30:04.995365  869787 fix.go:108] recreateIfNeeded on running-upgrade-20210813002835-676638: state=Running err=<nil>
	W0813 00:30:04.995422  869787 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 00:30:04.998342  869787 out.go:177] * Updating the running docker "running-upgrade-20210813002835-676638" container ...
	I0813 00:30:04.998392  869787 machine.go:88] provisioning docker machine ...
	I0813 00:30:04.998420  869787 ubuntu.go:169] provisioning hostname "running-upgrade-20210813002835-676638"
	I0813 00:30:04.998494  869787 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210813002835-676638
	I0813 00:30:05.045599  869787 main.go:130] libmachine: Using SSH client type: native
	I0813 00:30:05.045844  869787 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33380 <nil> <nil>}
	I0813 00:30:05.045871  869787 main.go:130] libmachine: About to run SSH command:
	sudo hostname running-upgrade-20210813002835-676638 && echo "running-upgrade-20210813002835-676638" | sudo tee /etc/hostname
	I0813 00:30:05.167079  869787 main.go:130] libmachine: SSH cmd err, output: <nil>: running-upgrade-20210813002835-676638
	
	I0813 00:30:05.167220  869787 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210813002835-676638
	I0813 00:30:05.212486  869787 main.go:130] libmachine: Using SSH client type: native
	I0813 00:30:05.212763  869787 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33380 <nil> <nil>}
	I0813 00:30:05.212799  869787 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-20210813002835-676638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-20210813002835-676638/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-20210813002835-676638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 00:30:05.329362  869787 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 00:30:05.329399  869787 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertR
emotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0813 00:30:05.329470  869787 ubuntu.go:177] setting up certificates
	I0813 00:30:05.329484  869787 provision.go:83] configureAuth start
	I0813 00:30:05.329555  869787 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20210813002835-676638
	I0813 00:30:05.373589  869787 provision.go:137] copyHostCerts
	I0813 00:30:05.373673  869787 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem, removing ...
	I0813 00:30:05.373690  869787 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 00:30:05.373753  869787 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1082 bytes)
	I0813 00:30:05.373875  869787 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem, removing ...
	I0813 00:30:05.373891  869787 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 00:30:05.373917  869787 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0813 00:30:05.373982  869787 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem, removing ...
	I0813 00:30:05.373992  869787 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 00:30:05.374015  869787 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1675 bytes)
	I0813 00:30:05.374074  869787 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-20210813002835-676638 san=[172.17.0.3 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-20210813002835-676638]
	I0813 00:30:05.648053  869787 provision.go:171] copyRemoteCerts
	I0813 00:30:05.648129  869787 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 00:30:05.648205  869787 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210813002835-676638
	I0813 00:30:05.692217  869787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33380 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/running-upgrade-20210813002835-676638/id_rsa Username:docker}
	I0813 00:30:05.778761  869787 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 00:30:05.826309  869787 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0813 00:30:05.845485  869787 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 00:30:05.863895  869787 provision.go:86] duration metric: configureAuth took 534.394074ms
	I0813 00:30:05.863926  869787 ubuntu.go:193] setting minikube options for container-runtime
	I0813 00:30:05.864281  869787 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210813002835-676638
	I0813 00:30:05.908566  869787 main.go:130] libmachine: Using SSH client type: native
	I0813 00:30:05.908781  869787 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33380 <nil> <nil>}
	I0813 00:30:05.908807  869787 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 00:30:06.384039  869787 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 00:30:06.384071  869787 machine.go:91] provisioned docker machine in 1.385669946s
	I0813 00:30:06.384083  869787 start.go:267] post-start starting for "running-upgrade-20210813002835-676638" (driver="docker")
	I0813 00:30:06.384091  869787 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 00:30:06.384157  869787 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 00:30:06.384216  869787 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210813002835-676638
	I0813 00:30:06.432547  869787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33380 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/running-upgrade-20210813002835-676638/id_rsa Username:docker}
	I0813 00:30:06.518252  869787 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 00:30:06.522096  869787 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 00:30:06.522126  869787 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 00:30:06.522137  869787 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 00:30:06.522143  869787 info.go:137] Remote host: Ubuntu 19.10
	I0813 00:30:06.522154  869787 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0813 00:30:06.522203  869787 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0813 00:30:06.522306  869787 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem -> 6766382.pem in /etc/ssl/certs
	I0813 00:30:06.522412  869787 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 00:30:06.554340  869787 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem --> /etc/ssl/certs/6766382.pem (1708 bytes)
	I0813 00:30:06.594815  869787 start.go:270] post-start completed in 210.70536ms
	I0813 00:30:06.594919  869787 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 00:30:06.594976  869787 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210813002835-676638
	I0813 00:30:06.664660  869787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33380 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/running-upgrade-20210813002835-676638/id_rsa Username:docker}
	I0813 00:30:06.747826  869787 fix.go:57] fixHost completed within 1.799467542s
	I0813 00:30:06.747871  869787 start.go:80] releasing machines lock for "running-upgrade-20210813002835-676638", held for 1.79954732s
	I0813 00:30:06.747987  869787 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20210813002835-676638
	I0813 00:30:06.799928  869787 ssh_runner.go:149] Run: systemctl --version
	I0813 00:30:06.799989  869787 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210813002835-676638
	I0813 00:30:06.800158  869787 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 00:30:06.800232  869787 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20210813002835-676638
	I0813 00:30:06.859655  869787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33380 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/running-upgrade-20210813002835-676638/id_rsa Username:docker}
	I0813 00:30:06.863229  869787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33380 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/running-upgrade-20210813002835-676638/id_rsa Username:docker}
	I0813 00:30:06.976327  869787 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 00:30:07.014914  869787 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 00:30:07.052525  869787 docker.go:153] disabling docker service ...
	I0813 00:30:07.052589  869787 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 00:30:07.073935  869787 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 00:30:07.088524  869787 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 00:30:07.222643  869787 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 00:30:07.310325  869787 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 00:30:07.323580  869787 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 00:30:07.342117  869787 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.2"|' -i /etc/crio/crio.conf"
	I0813 00:30:07.355406  869787 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 00:30:07.366334  869787 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 00:30:07.366406  869787 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 00:30:07.375891  869787 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 00:30:07.383490  869787 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 00:30:07.451859  869787 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 00:30:07.543185  869787 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 00:30:07.543258  869787 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:30:07.547465  869787 retry.go:31] will retry after 1.104660288s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 00:30:08.652828  869787 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:30:08.656581  869787 retry.go:31] will retry after 2.160763633s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 00:30:10.818869  869787 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:30:10.823345  869787 retry.go:31] will retry after 2.62026012s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 00:30:13.443760  869787 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:30:13.447445  869787 retry.go:31] will retry after 3.164785382s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 00:30:16.613386  869787 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:30:16.617070  869787 retry.go:31] will retry after 4.680977329s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 00:30:21.298813  869787 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:30:21.303076  869787 retry.go:31] will retry after 9.01243771s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 00:30:30.315732  869787 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:30:30.319431  869787 retry.go:31] will retry after 6.442959172s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 00:30:36.763185  869787 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:30:36.767016  869787 retry.go:31] will retry after 11.217246954s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 00:30:47.985415  869787 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:30:47.993242  869787 retry.go:31] will retry after 15.299675834s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 00:31:03.294350  869787 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:31:03.302232  869787 out.go:177] 
	W0813 00:31:03.302450  869787 out.go:242] X Exiting due to RUNTIME_ENABLE: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	
	W0813 00:31:03.302473  869787 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 00:31:03.307320  869787 out.go:242] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                           │
	│                                                                                                                                                         │
	│    * Please attach the following file to the GitHub issue:                                                                                              │
	│    * - /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 00:31:03.309785  869787 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:140: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-20210813002835-676638 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:613: *** TestRunningBinaryUpgrade FAILED at 2021-08-13 00:31:03.33375258 +0000 UTC m=+2181.945749005
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect running-upgrade-20210813002835-676638
helpers_test.go:236: (dbg) docker inspect running-upgrade-20210813002835-676638:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ac60c48a7a59ab21b5d8b4fbfd7c76bbe246d11096f2b880cc4bafcfd3cef334",
	        "Created": "2021-08-13T00:28:36.790538471Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 846810,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T00:28:37.291725268Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/ac60c48a7a59ab21b5d8b4fbfd7c76bbe246d11096f2b880cc4bafcfd3cef334/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ac60c48a7a59ab21b5d8b4fbfd7c76bbe246d11096f2b880cc4bafcfd3cef334/hostname",
	        "HostsPath": "/var/lib/docker/containers/ac60c48a7a59ab21b5d8b4fbfd7c76bbe246d11096f2b880cc4bafcfd3cef334/hosts",
	        "LogPath": "/var/lib/docker/containers/ac60c48a7a59ab21b5d8b4fbfd7c76bbe246d11096f2b880cc4bafcfd3cef334/ac60c48a7a59ab21b5d8b4fbfd7c76bbe246d11096f2b880cc4bafcfd3cef334-json.log",
	        "Name": "/running-upgrade-20210813002835-676638",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-20210813002835-676638:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/88d6924f2e4b57f60e975b37a9f5e1c61969167824d849854981ffbf899091e8-init/diff:/var/lib/docker/overlay2/de6af85d43ab6de82a80599c78c852ce945860493e987ae8d4747813e3e12e71/diff:/var/lib/docker/overlay2/1463f2b27e2cf184f9e8a7e127a3f6ecaa9eb4e8c586d13eb98ef0034f418eca/diff:/var/lib/docker/overlay2/6fae380631f93f264fc69450c6bd514661e47e2e598e586796b4ef5487d2609b/diff:/var/lib/docker/overlay2/9455405085a27b776dbc930a9422413a8738ee14a396dba1428ad3477dd78d19/diff:/var/lib/docker/overlay2/872cbd16ad0ea1d1a8643af87081f3ffd14a4cc7bb05e0117ff9630a1e4c2d63/diff:/var/lib/docker/overlay2/1cfe85b8b9110dde1cfd7cd18efd634d01d4c6b46da62d17a26da23aa02686be/diff:/var/lib/docker/overlay2/189b625246c097ae32fa419f11770e2e28b30b39afd65b82dc25c55530584d10/diff:/var/lib/docker/overlay2/f5b5179d9c5187ae940c59c3a026ef190561c0532770dbd761fecfc6251ebc05/diff:/var/lib/docker/overlay2/116a802d8be0890169902c8fcb2ad1b64b5391fa1a060c1f02d344668cf1e40f/diff:/var/lib/docker/overlay2/d335f4f8874ac51d7120bb297af4bf45b5ab1c41f3977cabfa2149948695c6e9/diff:/var/lib/docker/overlay2/cfc70be91e8c4eaba2033239d05c70abdaaae7922eebe0a9694302cde2259694/diff:/var/lib/docker/overlay2/901fced2d4ec35a47265e02248dd5ae2f3130431109d25e604d2ab568d1bde04/diff:/var/lib/docker/overlay2/7aa7e86939390a956567b669d4bab83fb60927bb30f5a9803342e0d68bd3e23f/diff:/var/lib/docker/overlay2/a482a71267c1aded8aadff398336811f3437dec13bdea6065ac47ad1eb5eed5f/diff:/var/lib/docker/overlay2/972f22e2510a2c07193729807506aedac3ec49bb2063b2b7c3e443b7380a91c5/diff:/var/lib/docker/overlay2/8c845952b97a856c0093d30bbe000f51feda3cb8d3a525e83d8633d5af175938/diff:/var/lib/docker/overlay2/85f0f897ba04db0a863dd2628b8b2e7d3539cecbb6acd1530907b350763c6550/diff:/var/lib/docker/overlay2/f4060f75e85c12bf3ba15020ed3c17665bed2409afc88787b2341c6d5af01040/diff:/var/lib/docker/overlay2/7fa8f93d5ee1866f01fa7288d688713da7f1044a1942eb59534b94cb95cc3d74/diff:/var/lib/docker/overlay2/0d91418cf4c9ce3175fcb432fd443e696caae83859f6d5e10cdfaf102243e189/diff:/var/lib/docker/overlay2/f4f812cd2dd5b0b125eea4bff29d3ed0d34fa877c492159a8b8b6aee1f536d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/88d6924f2e4b57f60e975b37a9f5e1c61969167824d849854981ffbf899091e8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/88d6924f2e4b57f60e975b37a9f5e1c61969167824d849854981ffbf899091e8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/88d6924f2e4b57f60e975b37a9f5e1c61969167824d849854981ffbf899091e8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-20210813002835-676638",
	                "Source": "/var/lib/docker/volumes/running-upgrade-20210813002835-676638/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-20210813002835-676638",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-20210813002835-676638",
	                "name.minikube.sigs.k8s.io": "running-upgrade-20210813002835-676638",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dd126402fe04b318197f2e3a3aab822970a8293346b707fb79548f900d843fcc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33380"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33379"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33378"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dd126402fe04",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "e8abfa988feb2f6c0c104d54e9829026ea0345b1f355f66d66f230b614b1f5ee",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.3",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:03",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "6667e0301639765b26be7724ab6238ee3df1921ee66a73f6160b0b03ad759362",
	                    "EndpointID": "e8abfa988feb2f6c0c104d54e9829026ea0345b1f355f66d66f230b614b1f5ee",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.3",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:03",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
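The inspect dump above records the host-port mappings minikube depends on (container ports 22, 2376, and 8443 published on 127.0.0.1). As a minimal sketch only, here is how the 22/tcp host port could be pulled out of such a saved dump; the `/tmp/inspect.json` path and the use of `grep` are illustrative, not what minikube does:

```shell
# Sketch: extract the host port bound to container port 22 from a saved
# `docker inspect` fragment. File path and grep approach are illustrative;
# the live test reads the same value with a Go template via
# `docker container inspect -f ...` (see the cli_runner lines in this log).
cat > /tmp/inspect.json <<'EOF'
"Ports": {
    "22/tcp": [
        {
            "HostIp": "127.0.0.1",
            "HostPort": "33380"
        }
    ]
}
EOF
grep -A4 '"22/tcp"' /tmp/inspect.json | grep -o '[0-9]\{4,5\}'
```

Against a live daemon the equivalent one-liner is the `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` call that appears later in this transcript.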
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-20210813002835-676638 -n running-upgrade-20210813002835-676638

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-20210813002835-676638 -n running-upgrade-20210813002835-676638: exit status 4 (441.041627ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0813 00:31:03.790208  883086 status.go:413] kubeconfig endpoint: extract IP: "running-upgrade-20210813002835-676638" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig

** /stderr **
helpers_test.go:240: status error: exit status 4 (may be ok)
helpers_test.go:242: "running-upgrade-20210813002835-676638" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:176: Cleaning up "running-upgrade-20210813002835-676638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20210813002835-676638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20210813002835-676638: (3.828745934s)
--- FAIL: TestRunningBinaryUpgrade (151.80s)
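The exit-status-4 failure above comes down to the profile name being absent from the kubeconfig ("does not appear in ... kubeconfig"). A minimal sketch of that lookup against a throwaway kubeconfig follows; the file path and the `some-other-profile` cluster entry are invented for illustration and this is not minikube's actual status code:

```shell
# Sketch of the failing check: minikube status cannot extract the API
# endpoint when the profile has no cluster entry in the kubeconfig.
# /tmp/kubeconfig and its single cluster entry are invented here.
cat > /tmp/kubeconfig <<'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:33378
  name: some-other-profile
EOF
profile=running-upgrade-20210813002835-676638
if grep -q "name: ${profile}" /tmp/kubeconfig; then
  echo "endpoint found for ${profile}"
else
  echo "\"${profile}\" does not appear in /tmp/kubeconfig"
fi
```

The report's own suggested fix, `minikube update-context`, rewrites the stale kubeconfig entry so a subsequent `minikube status` can extract the endpoint.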

TestStoppedBinaryUpgrade (172.98s)

=== RUN   TestStoppedBinaryUpgrade
=== PAUSE TestStoppedBinaryUpgrade

=== CONT  TestStoppedBinaryUpgrade

=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:186: (dbg) Run:  /tmp/minikube-v1.9.0.953058108.exe start -p stopped-upgrade-20210813002640-676638 --memory=2200 --vm-driver=docker  --container-runtime=crio

=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:186: (dbg) Done: /tmp/minikube-v1.9.0.953058108.exe start -p stopped-upgrade-20210813002640-676638 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m34.593778494s)
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.9.0.953058108.exe -p stopped-upgrade-20210813002640-676638 stop
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.9.0.953058108.exe -p stopped-upgrade-20210813002640-676638 stop: (1.772675719s)
version_upgrade_test.go:201: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20210813002640-676638 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio

=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:201: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-20210813002640-676638 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (1m13.296467458s)

-- stdout --
	* [stopped-upgrade-20210813002640-676638] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_LOCATION=12230
	* Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node stopped-upgrade-20210813002640-676638 in cluster stopped-upgrade-20210813002640-676638
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-20210813002640-676638" ...
	
	

-- /stdout --
** stderr ** 
	I0813 00:28:17.623488  842771 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:28:17.623595  842771 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:28:17.623599  842771 out.go:311] Setting ErrFile to fd 2...
	I0813 00:28:17.623602  842771 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:28:17.623712  842771 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:28:17.623962  842771 out.go:305] Setting JSON to false
	I0813 00:28:17.665306  842771 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":15059,"bootTime":1628799438,"procs":317,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 00:28:17.665420  842771 start.go:121] virtualization: kvm guest
	I0813 00:28:17.668813  842771 out.go:177] * [stopped-upgrade-20210813002640-676638] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 00:28:17.670652  842771 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:28:17.668994  842771 notify.go:169] Checking for updates...
	I0813 00:28:17.672316  842771 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 00:28:17.674139  842771 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:28:17.676018  842771 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 00:28:17.676416  842771 start_flags.go:521] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0813 00:28:17.678807  842771 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0813 00:28:17.678857  842771 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 00:28:17.745517  842771 docker.go:132] docker version: linux-19.03.15
	I0813 00:28:17.745646  842771 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 00:28:17.845125  842771 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:6 ContainersRunning:5 ContainersPaused:0 ContainersStopped:1 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:84 SystemTime:2021-08-13 00:28:17.793409447 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 00:28:17.845315  842771 docker.go:244] overlay module found
	I0813 00:28:17.848256  842771 out.go:177] * Using the docker driver based on existing profile
	I0813 00:28:17.848289  842771 start.go:278] selected driver: docker
	I0813 00:28:17.848300  842771 start.go:751] validating driver "docker" against &{Name:stopped-upgrade-20210813002640-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-20210813002640-676638 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:28:17.848404  842771 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 00:28:17.848447  842771 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 00:28:17.848465  842771 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0813 00:28:17.850299  842771 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 00:28:17.851192  842771 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 00:28:17.946324  842771 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:6 ContainersRunning:5 ContainersPaused:0 ContainersStopped:1 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:true NGoroutines:83 SystemTime:2021-08-13 00:28:17.895385391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	W0813 00:28:17.946554  842771 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 00:28:17.946607  842771 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0813 00:28:17.949187  842771 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 00:28:17.949300  842771 cni.go:93] Creating CNI manager for ""
	I0813 00:28:17.949319  842771 cni.go:142] EnableDefaultCNI is true, recommending bridge
	I0813 00:28:17.949333  842771 start_flags.go:277] config:
	{Name:stopped-upgrade-20210813002640-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-20210813002640-676638 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:28:17.951495  842771 out.go:177] * Starting control plane node stopped-upgrade-20210813002640-676638 in cluster stopped-upgrade-20210813002640-676638
	I0813 00:28:17.951547  842771 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 00:28:17.953178  842771 out.go:177] * Pulling base image ...
	I0813 00:28:17.953217  842771 preload.go:131] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0813 00:28:17.953308  842771 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	W0813 00:28:17.991967  842771 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0813 00:28:17.992162  842771 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/stopped-upgrade-20210813002640-676638/config.json ...
	I0813 00:28:17.992305  842771 cache.go:108] acquiring lock: {Name:mkebf5e5183b3fe1832480b10a0767c0216ef0fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:28:17.992533  842771 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0813 00:28:17.992522  842771 cache.go:108] acquiring lock: {Name:mk2f8f4570a6c789373dfd171ca45edd03578901 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:28:17.992569  842771 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 284.518µs
	I0813 00:28:17.992585  842771 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0813 00:28:17.992542  842771 cache.go:108] acquiring lock: {Name:mkaad6b66e2dc046b4e0ed6a834b4d5311810374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:28:17.992601  842771 cache.go:108] acquiring lock: {Name:mkd01ce055fc376d1e00625138b7b37ece1a1361 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:28:17.992595  842771 cache.go:108] acquiring lock: {Name:mk083d4b02aa9d0f9be1d05f8e6c4194cbc5f200 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:28:17.992594  842771 cache.go:108] acquiring lock: {Name:mk36aa2c825ea6a2262161b88564609f4d60e208 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:28:17.992662  842771 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0813 00:28:17.992644  842771 cache.go:108] acquiring lock: {Name:mk86691a35a6e63905cd7f5eb05f6d130611ae9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:28:17.992679  842771 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 80.947µs
	I0813 00:28:17.992694  842771 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0813 00:28:17.992686  842771 cache.go:108] acquiring lock: {Name:mk7e91618de7d10682336f6b0a703a9b3cc6d7c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:28:17.992718  842771 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 exists
	I0813 00:28:17.992303  842771 cache.go:108] acquiring lock: {Name:mk93fe9de025e2f7d4c900d8f720f3f08473ba0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:28:17.992739  842771 cache.go:97] cache image "k8s.gcr.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7" took 96.749µs
	I0813 00:28:17.992759  842771 cache.go:81] save to tar file k8s.gcr.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 succeeded
	I0813 00:28:17.992740  842771 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 exists
	I0813 00:28:17.992776  842771 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
	I0813 00:28:17.992781  842771 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0" took 297.628µs
	I0813 00:28:17.992794  842771 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 succeeded
	I0813 00:28:17.992805  842771 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 exists
	I0813 00:28:17.992799  842771 cache.go:97] cache image "k8s.gcr.io/pause:3.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 244.168µs
	I0813 00:28:17.992799  842771 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
	I0813 00:28:17.992819  842771 cache.go:81] save to tar file k8s.gcr.io/pause:3.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
	I0813 00:28:17.992820  842771 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0" took 532.453µs
	I0813 00:28:17.992839  842771 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 succeeded
	I0813 00:28:17.992835  842771 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 409.624µs
	I0813 00:28:17.992851  842771 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0813 00:28:17.992626  842771 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 exists
	I0813 00:28:17.992303  842771 cache.go:108] acquiring lock: {Name:mk9074d385c3911880d5dcb8de8278c6b7e76cfa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:28:17.992886  842771 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0" took 370.862µs
	I0813 00:28:17.992926  842771 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 succeeded
	I0813 00:28:17.992929  842771 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0813 00:28:17.992957  842771 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 exists
	I0813 00:28:17.992976  842771 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0" took 684.347µs
	I0813 00:28:17.992977  842771 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 429.937µs
	I0813 00:28:17.992993  842771 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0813 00:28:17.992990  842771 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 succeeded
	I0813 00:28:17.993007  842771 cache.go:88] Successfully saved all images to host disk.
	I0813 00:28:18.063368  842771 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0813 00:28:18.063410  842771 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0813 00:28:18.063434  842771 cache.go:205] Successfully downloaded all kic artifacts
	I0813 00:28:18.063489  842771 start.go:313] acquiring machines lock for stopped-upgrade-20210813002640-676638: {Name:mk8a642b31be23d9ecb477e07527d885498eb47c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:28:18.063681  842771 start.go:317] acquired machines lock for "stopped-upgrade-20210813002640-676638" in 163.616µs
	I0813 00:28:18.063712  842771 start.go:93] Skipping create...Using existing machine configuration
	I0813 00:28:18.063723  842771 fix.go:55] fixHost starting: m01
	I0813 00:28:18.064086  842771 cli_runner.go:115] Run: docker container inspect stopped-upgrade-20210813002640-676638 --format={{.State.Status}}
	I0813 00:28:18.115365  842771 fix.go:108] recreateIfNeeded on stopped-upgrade-20210813002640-676638: state=Stopped err=<nil>
	W0813 00:28:18.115435  842771 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 00:28:18.118558  842771 out.go:177] * Restarting existing docker container for "stopped-upgrade-20210813002640-676638" ...
	I0813 00:28:18.118655  842771 cli_runner.go:115] Run: docker start stopped-upgrade-20210813002640-676638
	I0813 00:28:18.993394  842771 cli_runner.go:115] Run: docker container inspect stopped-upgrade-20210813002640-676638 --format={{.State.Status}}
	I0813 00:28:19.042906  842771 kic.go:420] container "stopped-upgrade-20210813002640-676638" state is running.
	I0813 00:28:19.043340  842771 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-20210813002640-676638
	I0813 00:28:19.088073  842771 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/stopped-upgrade-20210813002640-676638/config.json ...
	I0813 00:28:19.088320  842771 machine.go:88] provisioning docker machine ...
	I0813 00:28:19.088366  842771 ubuntu.go:169] provisioning hostname "stopped-upgrade-20210813002640-676638"
	I0813 00:28:19.088431  842771 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210813002640-676638
	I0813 00:28:19.133566  842771 main.go:130] libmachine: Using SSH client type: native
	I0813 00:28:19.133795  842771 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33377 <nil> <nil>}
	I0813 00:28:19.133817  842771 main.go:130] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-20210813002640-676638 && echo "stopped-upgrade-20210813002640-676638" | sudo tee /etc/hostname
	I0813 00:28:19.134590  842771 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44236->127.0.0.1:33377: read: connection reset by peer
	I0813 00:28:22.136208  842771 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44570->127.0.0.1:33377: read: connection reset by peer
	I0813 00:28:25.137364  842771 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44720->127.0.0.1:33377: read: connection reset by peer
	I0813 00:28:28.573412  842771 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44842->127.0.0.1:33377: read: connection reset by peer
	I0813 00:28:32.620120  842771 main.go:130] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-20210813002640-676638
	
	I0813 00:28:32.620189  842771 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210813002640-676638
	I0813 00:28:32.664824  842771 main.go:130] libmachine: Using SSH client type: native
	I0813 00:28:32.665050  842771 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33377 <nil> <nil>}
	I0813 00:28:32.665089  842771 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-20210813002640-676638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-20210813002640-676638/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-20210813002640-676638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 00:28:32.777123  842771 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 00:28:32.777157  842771 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertR
emotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0813 00:28:32.777206  842771 ubuntu.go:177] setting up certificates
	I0813 00:28:32.777218  842771 provision.go:83] configureAuth start
	I0813 00:28:32.777325  842771 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-20210813002640-676638
	I0813 00:28:32.838322  842771 provision.go:137] copyHostCerts
	I0813 00:28:32.838399  842771 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem, removing ...
	I0813 00:28:32.838413  842771 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 00:28:32.838477  842771 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1082 bytes)
	I0813 00:28:32.838593  842771 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem, removing ...
	I0813 00:28:32.838615  842771 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 00:28:32.838644  842771 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0813 00:28:32.838699  842771 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem, removing ...
	I0813 00:28:32.838709  842771 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 00:28:32.838731  842771 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1675 bytes)
	I0813 00:28:32.838777  842771 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-20210813002640-676638 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-20210813002640-676638]
	I0813 00:28:32.950141  842771 provision.go:171] copyRemoteCerts
	I0813 00:28:32.950206  842771 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 00:28:32.950252  842771 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210813002640-676638
	I0813 00:28:33.013718  842771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33377 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/stopped-upgrade-20210813002640-676638/id_rsa Username:docker}
	I0813 00:28:33.098009  842771 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0813 00:28:33.133447  842771 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 00:28:33.171040  842771 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 00:28:33.188148  842771 provision.go:86] duration metric: configureAuth took 410.878828ms
	I0813 00:28:33.188175  842771 ubuntu.go:193] setting minikube options for container-runtime
	I0813 00:28:33.188424  842771 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210813002640-676638
	I0813 00:28:33.260049  842771 main.go:130] libmachine: Using SSH client type: native
	I0813 00:28:33.260267  842771 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33377 <nil> <nil>}
	I0813 00:28:33.260290  842771 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 00:28:34.053481  842771 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 00:28:34.053509  842771 machine.go:91] provisioned docker machine in 14.965169615s
	I0813 00:28:34.053522  842771 start.go:267] post-start starting for "stopped-upgrade-20210813002640-676638" (driver="docker")
	I0813 00:28:34.053530  842771 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 00:28:34.053588  842771 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 00:28:34.053639  842771 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210813002640-676638
	I0813 00:28:34.098000  842771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33377 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/stopped-upgrade-20210813002640-676638/id_rsa Username:docker}
	I0813 00:28:34.181541  842771 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 00:28:34.184743  842771 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 00:28:34.184768  842771 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 00:28:34.184776  842771 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 00:28:34.184782  842771 info.go:137] Remote host: Ubuntu 19.10
	I0813 00:28:34.184792  842771 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0813 00:28:34.184890  842771 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0813 00:28:34.185011  842771 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem -> 6766382.pem in /etc/ssl/certs
	I0813 00:28:34.185141  842771 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 00:28:34.193185  842771 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem --> /etc/ssl/certs/6766382.pem (1708 bytes)
	I0813 00:28:34.213638  842771 start.go:270] post-start completed in 160.097805ms
	I0813 00:28:34.213724  842771 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 00:28:34.213775  842771 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210813002640-676638
	I0813 00:28:34.266403  842771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33377 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/stopped-upgrade-20210813002640-676638/id_rsa Username:docker}
	I0813 00:28:34.351280  842771 fix.go:57] fixHost completed within 16.28755155s
	I0813 00:28:34.351302  842771 start.go:80] releasing machines lock for "stopped-upgrade-20210813002640-676638", held for 16.28760514s
	I0813 00:28:34.351383  842771 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-20210813002640-676638
	I0813 00:28:34.421511  842771 ssh_runner.go:149] Run: systemctl --version
	I0813 00:28:34.421548  842771 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 00:28:34.421586  842771 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210813002640-676638
	I0813 00:28:34.421614  842771 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20210813002640-676638
	I0813 00:28:34.496464  842771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33377 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/stopped-upgrade-20210813002640-676638/id_rsa Username:docker}
	I0813 00:28:34.500676  842771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33377 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/stopped-upgrade-20210813002640-676638/id_rsa Username:docker}
	I0813 00:28:34.645977  842771 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 00:28:34.678989  842771 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 00:28:34.688701  842771 docker.go:153] disabling docker service ...
	I0813 00:28:34.688782  842771 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 00:28:34.705894  842771 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 00:28:34.717316  842771 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 00:28:34.777072  842771 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 00:28:34.878548  842771 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 00:28:34.891013  842771 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 00:28:34.906908  842771 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.2"|' -i /etc/crio/crio.conf"
	I0813 00:28:34.915335  842771 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 00:28:34.922559  842771 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 00:28:34.922616  842771 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 00:28:34.930118  842771 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 00:28:34.936977  842771 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 00:28:34.987082  842771 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 00:28:35.072408  842771 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 00:28:35.072497  842771 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:28:35.075846  842771 retry.go:31] will retry after 1.104660288s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 00:28:36.181201  842771 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:28:36.185038  842771 retry.go:31] will retry after 2.160763633s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 00:28:38.347122  842771 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:28:38.351579  842771 retry.go:31] will retry after 2.62026012s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 00:28:40.973381  842771 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:28:40.977198  842771 retry.go:31] will retry after 3.164785382s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 00:28:44.142228  842771 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:28:44.146284  842771 retry.go:31] will retry after 4.680977329s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 00:28:48.827859  842771 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:28:48.831802  842771 retry.go:31] will retry after 9.01243771s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 00:28:57.844775  842771 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:28:57.849029  842771 retry.go:31] will retry after 6.442959172s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 00:29:04.292676  842771 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:29:04.297022  842771 retry.go:31] will retry after 11.217246954s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 00:29:15.514510  842771 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:29:15.518581  842771 retry.go:31] will retry after 15.299675834s: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	I0813 00:29:30.819394  842771 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:29:30.828263  842771 out.go:177] 
	W0813 00:29:30.828459  842771 out.go:242] X Exiting due to RUNTIME_ENABLE: stat /var/run/crio/crio.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/run/crio/crio.sock': Permission denied
	
	
	W0813 00:29:30.828474  842771 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 00:29:30.831192  842771 out.go:242] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                           │
	│                                                                                                                                                         │
	│    * Please attach the following file to the GitHub issue:                                                                                              │
	│    * - /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 00:29:30.832617  842771 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:203: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-20210813002640-676638 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:613: *** TestStoppedBinaryUpgrade FAILED at 2021-08-13 00:29:30.854907422 +0000 UTC m=+2089.466903832
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStoppedBinaryUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect stopped-upgrade-20210813002640-676638
helpers_test.go:236: (dbg) docker inspect stopped-upgrade-20210813002640-676638:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "caf73d1703b7253c05d76b57bf9e97d987dc2192a24688c231ff95f706ea17b8",
	        "Created": "2021-08-13T00:26:42.126935547Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 843204,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T00:28:18.98428238Z",
	            "FinishedAt": "2021-08-13T00:28:17.093181061Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/caf73d1703b7253c05d76b57bf9e97d987dc2192a24688c231ff95f706ea17b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/caf73d1703b7253c05d76b57bf9e97d987dc2192a24688c231ff95f706ea17b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/caf73d1703b7253c05d76b57bf9e97d987dc2192a24688c231ff95f706ea17b8/hosts",
	        "LogPath": "/var/lib/docker/containers/caf73d1703b7253c05d76b57bf9e97d987dc2192a24688c231ff95f706ea17b8/caf73d1703b7253c05d76b57bf9e97d987dc2192a24688c231ff95f706ea17b8-json.log",
	        "Name": "/stopped-upgrade-20210813002640-676638",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "stopped-upgrade-20210813002640-676638:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9dae8fbe8874305d8189e93289d8781f4fef1a0877549008dbbdb5342a468f92-init/diff:/var/lib/docker/overlay2/de6af85d43ab6de82a80599c78c852ce945860493e987ae8d4747813e3e12e71/diff:/var/lib/docker/overlay2/1463f2b27e2cf184f9e8a7e127a3f6ecaa9eb4e8c586d13eb98ef0034f418eca/diff:/var/lib/docker/overlay2/6fae380631f93f264fc69450c6bd514661e47e2e598e586796b4ef5487d2609b/diff:/var/lib/docker/overlay2/9455405085a27b776dbc930a9422413a8738ee14a396dba1428ad3477dd78d19/diff:/var/lib/docker/overlay2/872cbd16ad0ea1d1a8643af87081f3ffd14a4cc7bb05e0117ff9630a1e4c2d63/diff:/var/lib/docker/overlay2/1cfe85b8b9110dde1cfd7cd18efd634d01d4c6b46da62d17a26da23aa02686be/diff:/var/lib/docker/overlay2/189b625246c097ae32fa419f11770e2e28b30b39afd65b82dc25c55530584d10/diff:/var/lib/docker/overlay2/f5b5179d9c5187ae940c59c3a026ef190561c0532770dbd761fecfc6251ebc05/diff:/var/lib/docker/overlay2/116a802d8be0890169902c8fcb2ad1b64b5391fa1a060c1f02d344668cf1e40f/diff:/var/lib/docker/overlay2/d335f4
f8874ac51d7120bb297af4bf45b5ab1c41f3977cabfa2149948695c6e9/diff:/var/lib/docker/overlay2/cfc70be91e8c4eaba2033239d05c70abdaaae7922eebe0a9694302cde2259694/diff:/var/lib/docker/overlay2/901fced2d4ec35a47265e02248dd5ae2f3130431109d25e604d2ab568d1bde04/diff:/var/lib/docker/overlay2/7aa7e86939390a956567b669d4bab83fb60927bb30f5a9803342e0d68bd3e23f/diff:/var/lib/docker/overlay2/a482a71267c1aded8aadff398336811f3437dec13bdea6065ac47ad1eb5eed5f/diff:/var/lib/docker/overlay2/972f22e2510a2c07193729807506aedac3ec49bb2063b2b7c3e443b7380a91c5/diff:/var/lib/docker/overlay2/8c845952b97a856c0093d30bbe000f51feda3cb8d3a525e83d8633d5af175938/diff:/var/lib/docker/overlay2/85f0f897ba04db0a863dd2628b8b2e7d3539cecbb6acd1530907b350763c6550/diff:/var/lib/docker/overlay2/f4060f75e85c12bf3ba15020ed3c17665bed2409afc88787b2341c6d5af01040/diff:/var/lib/docker/overlay2/7fa8f93d5ee1866f01fa7288d688713da7f1044a1942eb59534b94cb95cc3d74/diff:/var/lib/docker/overlay2/0d91418cf4c9ce3175fcb432fd443e696caae83859f6d5e10cdfaf102243e189/diff:/var/lib/d
ocker/overlay2/f4f812cd2dd5b0b125eea4bff29d3ed0d34fa877c492159a8b8b6aee1f536d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9dae8fbe8874305d8189e93289d8781f4fef1a0877549008dbbdb5342a468f92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9dae8fbe8874305d8189e93289d8781f4fef1a0877549008dbbdb5342a468f92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9dae8fbe8874305d8189e93289d8781f4fef1a0877549008dbbdb5342a468f92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "stopped-upgrade-20210813002640-676638",
	                "Source": "/var/lib/docker/volumes/stopped-upgrade-20210813002640-676638/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "stopped-upgrade-20210813002640-676638",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "stopped-upgrade-20210813002640-676638",
	                "name.minikube.sigs.k8s.io": "stopped-upgrade-20210813002640-676638",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "99d884d04080989420d4da62a8c8f873a6564a6bdd2738d7f10e45e34e6fbecb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33377"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33376"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33375"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/99d884d04080",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "a98af8494216811cd2197cb7e318a0f1bf639d5c924e1539ddb78f7e04a734ee",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "6667e0301639765b26be7724ab6238ee3df1921ee66a73f6160b0b03ad759362",
	                    "EndpointID": "a98af8494216811cd2197cb7e318a0f1bf639d5c924e1539ddb78f7e04a734ee",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p stopped-upgrade-20210813002640-676638 -n stopped-upgrade-20210813002640-676638
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p stopped-upgrade-20210813002640-676638 -n stopped-upgrade-20210813002640-676638: exit status 6 (325.987156ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 00:29:31.232487  860603 status.go:413] kubeconfig endpoint: extract IP: "stopped-upgrade-20210813002640-676638" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 6 (may be ok)
helpers_test.go:242: "stopped-upgrade-20210813002640-676638" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:176: Cleaning up "stopped-upgrade-20210813002640-676638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p stopped-upgrade-20210813002640-676638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p stopped-upgrade-20210813002640-676638: (2.567815239s)
--- FAIL: TestStoppedBinaryUpgrade (172.98s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (1656.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20210813003041-676638 --alsologtostderr -v=3
E0813 00:33:10.030771  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
E0813 00:33:21.597614  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-20210813003041-676638 --alsologtostderr -v=3: signal: killed (27m32.704070294s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-20210813003041-676638"  ...
	* Powering off "no-preload-20210813003041-676638" via SSH ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 00:33:08.486185  900190 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:33:08.486303  900190 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:33:08.486311  900190 out.go:311] Setting ErrFile to fd 2...
	I0813 00:33:08.486315  900190 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:33:08.486419  900190 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:33:08.486597  900190 out.go:305] Setting JSON to false
	I0813 00:33:08.486680  900190 mustload.go:65] Loading cluster: no-preload-20210813003041-676638
	I0813 00:33:08.487025  900190 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813003041-676638/config.json ...
	I0813 00:33:08.487192  900190 mustload.go:65] Loading cluster: no-preload-20210813003041-676638
	I0813 00:33:08.487338  900190 stop.go:39] StopHost: no-preload-20210813003041-676638
	I0813 00:33:08.490214  900190 out.go:177] * Stopping node "no-preload-20210813003041-676638"  ...
	I0813 00:33:08.490317  900190 cli_runner.go:115] Run: docker container inspect no-preload-20210813003041-676638 --format={{.State.Status}}
	I0813 00:33:08.546388  900190 out.go:177] * Powering off "no-preload-20210813003041-676638" via SSH ...
	I0813 00:33:08.546495  900190 cli_runner.go:115] Run: docker exec --privileged -t no-preload-20210813003041-676638 /bin/bash -c "sudo init 0"
	I0813 00:33:09.744855  900190 cli_runner.go:115] Run: docker container inspect no-preload-20210813003041-676638 --format={{.State.Status}}
	I0813 00:33:09.792601  900190 oci.go:646] temporary error: container no-preload-20210813003041-676638 status is Running but expect it to be exited
	I0813 00:33:09.792674  900190 oci.go:652] Successfully shutdown container no-preload-20210813003041-676638
	I0813 00:33:09.792681  900190 stop.go:88] shutdown container: err=<nil>
	I0813 00:33:09.792739  900190 main.go:130] libmachine: Stopping "no-preload-20210813003041-676638"...
	I0813 00:33:09.792821  900190 cli_runner.go:115] Run: docker container inspect no-preload-20210813003041-676638 --format={{.State.Status}}
	I0813 00:33:09.835011  900190 kic_runner.go:94] Run: systemctl --version
	I0813 00:33:09.835033  900190 kic_runner.go:115] Args: [docker exec --privileged no-preload-20210813003041-676638 systemctl --version]
	I0813 00:33:09.958461  900190 kic_runner.go:94] Run: sudo systemctl stop kubelet
	I0813 00:33:09.958486  900190 kic_runner.go:115] Args: [docker exec --privileged no-preload-20210813003041-676638 sudo systemctl stop kubelet]
	I0813 00:33:10.086380  900190 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0813 00:33:10.086480  900190 kic_runner.go:94] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0813 00:33:10.086502  900190 kic_runner.go:115] Args: [docker exec --privileged no-preload-20210813003041-676638 sudo -s eval crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator]
	I0813 00:33:18.286394  900190 kic.go:456] unable list containers : crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": exit status 1
	stdout:
	
	stderr:
	time="2021-08-13T00:33:12Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2021-08-13T00:33:14Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2021-08-13T00:33:16Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	time="2021-08-13T00:33:18Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	I0813 00:33:18.286442  900190 kic.go:466] successfully stopped kubernetes!
	I0813 00:33:18.286503  900190 kic_runner.go:94] Run: pgrep kube-apiserver
	I0813 00:33:18.286516  900190 kic_runner.go:115] Args: [docker exec --privileged no-preload-20210813003041-676638 pgrep kube-apiserver]

                                                
                                                
** /stderr **
start_stop_delete_test.go:203: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-20210813003041-676638 --alsologtostderr -v=3" : signal: killed
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/no-preload/serial/Stop]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect no-preload-20210813003041-676638
helpers_test.go:236: (dbg) docker inspect no-preload-20210813003041-676638:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bed1ace63349e417b12849bf057d4ce6612e0400c282af34395afe4e66b4d503",
	        "Created": "2021-08-13T00:30:43.317773136Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 878710,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T00:30:43.882031797Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
	        "ResolvConfPath": "/var/lib/docker/containers/bed1ace63349e417b12849bf057d4ce6612e0400c282af34395afe4e66b4d503/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bed1ace63349e417b12849bf057d4ce6612e0400c282af34395afe4e66b4d503/hostname",
	        "HostsPath": "/var/lib/docker/containers/bed1ace63349e417b12849bf057d4ce6612e0400c282af34395afe4e66b4d503/hosts",
	        "LogPath": "/var/lib/docker/containers/bed1ace63349e417b12849bf057d4ce6612e0400c282af34395afe4e66b4d503/bed1ace63349e417b12849bf057d4ce6612e0400c282af34395afe4e66b4d503-json.log",
	        "Name": "/no-preload-20210813003041-676638",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20210813003041-676638:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20210813003041-676638",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b0ba620139c941ff41c586298192b34e27be109e08f3129564cf34668a9e0290-init/diff:/var/lib/docker/overlay2/dbcccdfd1d8030c8fd84392abd0651a1c83d85eef1664675f19095ba94d0669c/diff:/var/lib/docker/overlay2/48560ccfa5a167568c6c277306b75040147fa803b45938da98f999b9b34770ec/diff:/var/lib/docker/overlay2/82edbb53b45859b009a31b65fc937517517994e9f7f2b61ab6a2cd9b5d793ea6/diff:/var/lib/docker/overlay2/67407816da0f4fc9226789b4471160bc847b978aa567cac46bd77c492c2e0bd8/diff:/var/lib/docker/overlay2/56fdeb530def71ef2955d22a7a9769b93f1dfc06d3e44e40ff20fce371d47e93/diff:/var/lib/docker/overlay2/eb0df517e10831d2d369ffbcc40b44f2ae8a39b1845697429224cb9ee96aef88/diff:/var/lib/docker/overlay2/0f2b796a50d0eef34622b78f6ac5a1b4914163a3b69965848a40245456d0a358/diff:/var/lib/docker/overlay2/fbdca95cd15a30d761b8949a28398e3694f3cd5af4e11f01066b8aa89ab0e223/diff:/var/lib/docker/overlay2/94899cbf3c806327e740cdc8b542a92bcf6e487ba93ab006749e9b13198b697a/diff:/var/lib/docker/overlay2/26a7c8
74215c711e77443c1fe264e626d5672e0127f6210b657baea90dc79adb/diff:/var/lib/docker/overlay2/16bd4fd277923e4600e9bd3819ae284053a099ab01e82d41f29752792664be0e/diff:/var/lib/docker/overlay2/7309f9c878e5d24824d68bef540877dc63f2d4c0745de5d0bf7f09e2a65c4600/diff:/var/lib/docker/overlay2/69de2b4390e19f2dda71ecf7c7fef7a9c01fabcf86a7c439a2919ae1284c8de6/diff:/var/lib/docker/overlay2/0ff6ec4f8c21672b1a77bd0033b8786942a677b797ffa1c0fbbb8e03a13d84ed/diff:/var/lib/docker/overlay2/d672d17598d05d9daa3eddac9f958d6913ebfccf431eb143f1f3903b89d150a9/diff:/var/lib/docker/overlay2/0f5d711484163b1b3f60dd6126d6daa0154c241a003764ef80e81d73d68b3ed6/diff:/var/lib/docker/overlay2/d3e7cb92a45651117204153d8d9bc8490164e7c8f439d0c6d157aebf680816ae/diff:/var/lib/docker/overlay2/4b81367fe927507da6730098aedd39a4bd58482dacc101a1dd66f191161dce2d/diff:/var/lib/docker/overlay2/5e9324cbc949319d8647c63cf76f1776a9474d1b961f604c7d87daeb7ebb111d/diff:/var/lib/docker/overlay2/010e1940f131233ee479e023b64f3d26d5b8444f44686cc3f0f1508d966a3c37/diff:/var/lib/d
ocker/overlay2/842ba2e088d8e8cdfa07950eb5be4178d7c22d5932419eb6881e2551df6383d1/diff:/var/lib/docker/overlay2/5a3a00a19445c1d8b4de2bac2fee0c796647356d04b424b1a92c252905d279b0/diff:/var/lib/docker/overlay2/fe2f56e2617a01ef886be702089b24e7058e63d8e824252c67d4c1a0879ad160/diff:/var/lib/docker/overlay2/38b35bcc55b3c7019af7c73f5eed6e0fc162e93a9f9dc7005f87a42358887784/diff:/var/lib/docker/overlay2/d9c894d408f003f4a81d231415f823e9936440a1ee3195405f2fa88b29cd4174/diff:/var/lib/docker/overlay2/1f809a5b11bbef9de3b7711ec341e3852caa4fd2c21e59015b082ae96980b66a/diff:/var/lib/docker/overlay2/99b8edcd10c58a9d6dc18c04bc3d78ee5e078fd13677063e50d0f8b7cd484f8e/diff:/var/lib/docker/overlay2/b7e659e3e24c55bbbb4560a609f06959cff67515ccfed5251eb149eb25e46066/diff:/var/lib/docker/overlay2/cd8af3183f19e2c4a376399788541c30ba2531a85eeecf9fe11864d329a507d9/diff:/var/lib/docker/overlay2/84813126d4751fc1c3f21d3f70102678caac8153899dc8a5e0af833842e299a8/diff:/var/lib/docker/overlay2/2a328079a8a98d312436a8d89f7b47dde7400fe0357b71b76ed6bc760f8
0f741/diff:/var/lib/docker/overlay2/68fb29110f487206a1dee378747a2f3ef1c89149c9968662726587ea918839d7/diff:/var/lib/docker/overlay2/f9baf28d86b9d2aa6fbb47eab690cb3a8a89d77efe26a5f0c73e8f86bce7214f/diff:/var/lib/docker/overlay2/dad436e2a201d537bbbd0e375ec55a078b79dad52ee0a39399e1d1044bef8136/diff:/var/lib/docker/overlay2/4c5f3abd2b3256b532e1522df89aaca80508afb850fe2081fd29e234ecc52a3c/diff:/var/lib/docker/overlay2/abd7c1d6e94e128091e4cd7c4e2b418a6e7f40430fa8e22724424ee318edfaa2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0ba620139c941ff41c586298192b34e27be109e08f3129564cf34668a9e0290/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0ba620139c941ff41c586298192b34e27be109e08f3129564cf34668a9e0290/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0ba620139c941ff41c586298192b34e27be109e08f3129564cf34668a9e0290/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20210813003041-676638",
	                "Source": "/var/lib/docker/volumes/no-preload-20210813003041-676638/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20210813003041-676638",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20210813003041-676638",
	                "name.minikube.sigs.k8s.io": "no-preload-20210813003041-676638",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dfcf23cc8c74e90f5590653ddc20a80c6747d9a1c6e0e3c8199e2c13e321f0d0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33411"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33412"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dfcf23cc8c74",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20210813003041-676638": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "bed1ace63349"
	                    ],
	                    "NetworkID": "68254c42997b28a759338c9f18571d9a7a3ebe8f27d5173f19b5f59fd9ac78e8",
	                    "EndpointID": "cc0c7995377273ed8e1cb3936d960ab56ddc7f43973e235c685357ff6a4cea33",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813003041-676638 -n no-preload-20210813003041-676638
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210813003041-676638 -n no-preload-20210813003041-676638: exit status 3 (3.332388942s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0813 01:00:44.495377  987876 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:52562->127.0.0.1:33416: read: connection reset by peer
	E0813 01:00:44.495404  987876 status.go:247] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:52562->127.0.0.1:33416: read: connection reset by peer

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 3 (may be ok)
helpers_test.go:242: "no-preload-20210813003041-676638" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (1656.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (6.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20210813003107-676638 --alsologtostderr -v=1
E0813 00:42:27.698313  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813003110-676638/client.crt: no such file or directory
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-20210813003107-676638 --alsologtostderr -v=1: exit status 80 (2.411906935s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-20210813003107-676638 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 00:42:26.885582  946213 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:42:26.885710  946213 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:42:26.885718  946213 out.go:311] Setting ErrFile to fd 2...
	I0813 00:42:26.885722  946213 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:42:26.885841  946213 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:42:26.886054  946213 out.go:305] Setting JSON to false
	I0813 00:42:26.886077  946213 mustload.go:65] Loading cluster: embed-certs-20210813003107-676638
	I0813 00:42:26.887986  946213 cli_runner.go:115] Run: docker container inspect embed-certs-20210813003107-676638 --format={{.State.Status}}
	I0813 00:42:26.931214  946213 host.go:66] Checking if "embed-certs-20210813003107-676638" exists ...
	I0813 00:42:26.931955  946213 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192
.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12122/minikube-v1.22.0-1628238775-12122.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628238775-12122/minikube-v1.22.0-1628238775-12122.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628238775-12122.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-sh
ares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-20210813003107-676638 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0813 00:42:26.935088  946213 out.go:177] * Pausing node embed-certs-20210813003107-676638 ... 
	I0813 00:42:26.935125  946213 host.go:66] Checking if "embed-certs-20210813003107-676638" exists ...
	I0813 00:42:26.935446  946213 ssh_runner.go:149] Run: systemctl --version
	I0813 00:42:26.935489  946213 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813003107-676638
	I0813 00:42:26.979874  946213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/embed-certs-20210813003107-676638/id_rsa Username:docker}
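The `docker container inspect -f` call two lines above uses a Go template to pull the host port bound to the container's SSH endpoint (`22/tcp`) out of the inspect JSON. A minimal sketch of the equivalent lookup, using a JSON fragment trimmed from the inspect output later in this report (the names and port value come from that output, not from minikube's code):

```python
import json

# JSON fragment mirroring the shape of `docker container inspect` output
# for the embed-certs container (see the post-mortem dump below).
inspect_output = json.loads("""
[
  {
    "NetworkSettings": {
      "Ports": {
        "22/tcp": [
          {"HostIp": "127.0.0.1", "HostPort": "33442"}
        ]
      }
    }
  }
]
""")

def ssh_host_port(inspect_data):
    """Return the host port bound to the container's 22/tcp endpoint,
    like the Go template
    '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'."""
    ports = inspect_data[0]["NetworkSettings"]["Ports"]
    return ports["22/tcp"][0]["HostPort"]

print(ssh_host_port(inspect_output))  # "33442" for the container above
```

That port (33442) is exactly what the `sshutil` line above reports as the new SSH client's target.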
	I0813 00:42:27.070007  946213 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:42:27.079725  946213 pause.go:50] kubelet running: true
	I0813 00:42:27.079789  946213 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 00:42:27.241830  946213 retry.go:31] will retry after 276.165072ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0813 00:42:27.518300  946213 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:42:27.528565  946213 pause.go:50] kubelet running: true
	I0813 00:42:27.528633  946213 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 00:42:27.683959  946213 retry.go:31] will retry after 540.190908ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0813 00:42:28.224389  946213 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:42:28.235234  946213 pause.go:50] kubelet running: true
	I0813 00:42:28.235353  946213 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 00:42:28.398034  946213 retry.go:31] will retry after 655.06503ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0813 00:42:29.053437  946213 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:42:29.064516  946213 pause.go:50] kubelet running: true
	I0813 00:42:29.064648  946213 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0813 00:42:29.231968  946213 out.go:177] 
	W0813 00:42:29.232211  946213 out.go:242] X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	W0813 00:42:29.232231  946213 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 00:42:29.240196  946213 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0813 00:42:29.242443  946213 out.go:177] 

                                                
                                                
** /stderr **
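The pause failure above comes from minikube's retry loop giving up: each failed `sudo systemctl disable --now kubelet` run is retried after a growing, jittered delay (276ms, 540ms, 655ms in the log) until the attempt budget is exhausted and the last error is surfaced as `GUEST_PAUSE`. A minimal sketch of that retry-with-backoff pattern — not minikube's actual `retry.go`, and the delay formula here is an assumption for illustration:

```python
import random
import time

def retry(fn, attempts=4, base_delay=0.25, sleep=time.sleep):
    """Run fn until it succeeds or attempts are exhausted, sleeping a
    growing, jittered delay between failures; re-raise the last error."""
    last_err = None
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as err:  # in Go this would be an error return
            last_err = err
            if attempt < attempts - 1:
                # grow the delay each round and add jitter, giving the
                # irregular intervals visible in the log above
                delay = base_delay * (attempt + 1) * random.uniform(0.9, 1.3)
                sleep(delay)
    raise last_err

calls = []
def flaky():
    """Stand-in for the systemctl invocation: fails twice, then succeeds."""
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("Process exited with status 1")
    return "disabled"

print(retry(flaky))  # succeeds on the third attempt
```

In the failing test the command never succeeds — `update-rc.d` aborts every time because the kubelet unit's SysV `Default-Start` has no runlevels — so the loop exhausts its budget and the pause exits with status 80.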
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p embed-certs-20210813003107-676638 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect embed-certs-20210813003107-676638
helpers_test.go:236: (dbg) docker inspect embed-certs-20210813003107-676638:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f6d10369f2ec95f1e489d682755a97d3558b63977214599bb618fdb50aedbea1",
	        "Created": "2021-08-13T00:34:27.125416283Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 911309,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T00:36:27.05449191Z",
	            "FinishedAt": "2021-08-13T00:36:24.417108562Z"
	        },
	        "Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
	        "ResolvConfPath": "/var/lib/docker/containers/f6d10369f2ec95f1e489d682755a97d3558b63977214599bb618fdb50aedbea1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f6d10369f2ec95f1e489d682755a97d3558b63977214599bb618fdb50aedbea1/hostname",
	        "HostsPath": "/var/lib/docker/containers/f6d10369f2ec95f1e489d682755a97d3558b63977214599bb618fdb50aedbea1/hosts",
	        "LogPath": "/var/lib/docker/containers/f6d10369f2ec95f1e489d682755a97d3558b63977214599bb618fdb50aedbea1/f6d10369f2ec95f1e489d682755a97d3558b63977214599bb618fdb50aedbea1-json.log",
	        "Name": "/embed-certs-20210813003107-676638",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-20210813003107-676638:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20210813003107-676638",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/45f753af1b8d453e1829ac66a826b1c26542343a3cee2ec3f5d9a77de7aa47f7-init/diff:/var/lib/docker/overlay2/dbcccdfd1d8030c8fd84392abd0651a1c83d85eef1664675f19095ba94d0669c/diff:/var/lib/docker/overlay2/48560ccfa5a167568c6c277306b75040147fa803b45938da98f999b9b34770ec/diff:/var/lib/docker/overlay2/82edbb53b45859b009a31b65fc937517517994e9f7f2b61ab6a2cd9b5d793ea6/diff:/var/lib/docker/overlay2/67407816da0f4fc9226789b4471160bc847b978aa567cac46bd77c492c2e0bd8/diff:/var/lib/docker/overlay2/56fdeb530def71ef2955d22a7a9769b93f1dfc06d3e44e40ff20fce371d47e93/diff:/var/lib/docker/overlay2/eb0df517e10831d2d369ffbcc40b44f2ae8a39b1845697429224cb9ee96aef88/diff:/var/lib/docker/overlay2/0f2b796a50d0eef34622b78f6ac5a1b4914163a3b69965848a40245456d0a358/diff:/var/lib/docker/overlay2/fbdca95cd15a30d761b8949a28398e3694f3cd5af4e11f01066b8aa89ab0e223/diff:/var/lib/docker/overlay2/94899cbf3c806327e740cdc8b542a92bcf6e487ba93ab006749e9b13198b697a/diff:/var/lib/docker/overlay2/26a7c8
74215c711e77443c1fe264e626d5672e0127f6210b657baea90dc79adb/diff:/var/lib/docker/overlay2/16bd4fd277923e4600e9bd3819ae284053a099ab01e82d41f29752792664be0e/diff:/var/lib/docker/overlay2/7309f9c878e5d24824d68bef540877dc63f2d4c0745de5d0bf7f09e2a65c4600/diff:/var/lib/docker/overlay2/69de2b4390e19f2dda71ecf7c7fef7a9c01fabcf86a7c439a2919ae1284c8de6/diff:/var/lib/docker/overlay2/0ff6ec4f8c21672b1a77bd0033b8786942a677b797ffa1c0fbbb8e03a13d84ed/diff:/var/lib/docker/overlay2/d672d17598d05d9daa3eddac9f958d6913ebfccf431eb143f1f3903b89d150a9/diff:/var/lib/docker/overlay2/0f5d711484163b1b3f60dd6126d6daa0154c241a003764ef80e81d73d68b3ed6/diff:/var/lib/docker/overlay2/d3e7cb92a45651117204153d8d9bc8490164e7c8f439d0c6d157aebf680816ae/diff:/var/lib/docker/overlay2/4b81367fe927507da6730098aedd39a4bd58482dacc101a1dd66f191161dce2d/diff:/var/lib/docker/overlay2/5e9324cbc949319d8647c63cf76f1776a9474d1b961f604c7d87daeb7ebb111d/diff:/var/lib/docker/overlay2/010e1940f131233ee479e023b64f3d26d5b8444f44686cc3f0f1508d966a3c37/diff:/var/lib/d
ocker/overlay2/842ba2e088d8e8cdfa07950eb5be4178d7c22d5932419eb6881e2551df6383d1/diff:/var/lib/docker/overlay2/5a3a00a19445c1d8b4de2bac2fee0c796647356d04b424b1a92c252905d279b0/diff:/var/lib/docker/overlay2/fe2f56e2617a01ef886be702089b24e7058e63d8e824252c67d4c1a0879ad160/diff:/var/lib/docker/overlay2/38b35bcc55b3c7019af7c73f5eed6e0fc162e93a9f9dc7005f87a42358887784/diff:/var/lib/docker/overlay2/d9c894d408f003f4a81d231415f823e9936440a1ee3195405f2fa88b29cd4174/diff:/var/lib/docker/overlay2/1f809a5b11bbef9de3b7711ec341e3852caa4fd2c21e59015b082ae96980b66a/diff:/var/lib/docker/overlay2/99b8edcd10c58a9d6dc18c04bc3d78ee5e078fd13677063e50d0f8b7cd484f8e/diff:/var/lib/docker/overlay2/b7e659e3e24c55bbbb4560a609f06959cff67515ccfed5251eb149eb25e46066/diff:/var/lib/docker/overlay2/cd8af3183f19e2c4a376399788541c30ba2531a85eeecf9fe11864d329a507d9/diff:/var/lib/docker/overlay2/84813126d4751fc1c3f21d3f70102678caac8153899dc8a5e0af833842e299a8/diff:/var/lib/docker/overlay2/2a328079a8a98d312436a8d89f7b47dde7400fe0357b71b76ed6bc760f8
0f741/diff:/var/lib/docker/overlay2/68fb29110f487206a1dee378747a2f3ef1c89149c9968662726587ea918839d7/diff:/var/lib/docker/overlay2/f9baf28d86b9d2aa6fbb47eab690cb3a8a89d77efe26a5f0c73e8f86bce7214f/diff:/var/lib/docker/overlay2/dad436e2a201d537bbbd0e375ec55a078b79dad52ee0a39399e1d1044bef8136/diff:/var/lib/docker/overlay2/4c5f3abd2b3256b532e1522df89aaca80508afb850fe2081fd29e234ecc52a3c/diff:/var/lib/docker/overlay2/abd7c1d6e94e128091e4cd7c4e2b418a6e7f40430fa8e22724424ee318edfaa2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/45f753af1b8d453e1829ac66a826b1c26542343a3cee2ec3f5d9a77de7aa47f7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/45f753af1b8d453e1829ac66a826b1c26542343a3cee2ec3f5d9a77de7aa47f7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/45f753af1b8d453e1829ac66a826b1c26542343a3cee2ec3f5d9a77de7aa47f7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20210813003107-676638",
	                "Source": "/var/lib/docker/volumes/embed-certs-20210813003107-676638/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20210813003107-676638",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20210813003107-676638",
	                "name.minikube.sigs.k8s.io": "embed-certs-20210813003107-676638",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3c13ebe772652424ec897ff3791fae9f68f6f35c3382eb04ec476c0e2bed18be",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3c13ebe77265",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20210813003107-676638": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f6d10369f2ec"
	                    ],
	                    "NetworkID": "c83c1e95b109d6be271c44cd1a18ca39647f3071af7bf6561f51b9b8be426451",
	                    "EndpointID": "9583045cc74d797afcc2fa9e57f14b58e176f7c2fd07691535f5370dda986ea4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813003107-676638 -n embed-certs-20210813003107-676638
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20210813003107-676638 logs -n 25
E0813 00:42:30.259391  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813003110-676638/client.crt: no such file or directory
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-20210813003107-676638 logs -n 25: (1.044233805s)
helpers_test.go:253: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                         | embed-certs-20210813003107-676638                | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:31:07 UTC | Fri, 13 Aug 2021 00:35:55 UTC |
	|         | embed-certs-20210813003107-676638                          |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20210813003107-676638                | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:36:03 UTC | Fri, 13 Aug 2021 00:36:03 UTC |
	|         | embed-certs-20210813003107-676638                          |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20210813003107-676638                | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:36:03 UTC | Fri, 13 Aug 2021 00:36:25 UTC |
	|         | embed-certs-20210813003107-676638                          |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20210813003107-676638                | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:36:25 UTC | Fri, 13 Aug 2021 00:36:25 UTC |
	|         | embed-certs-20210813003107-676638                          |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210813003110-676638 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:32:54 UTC | Fri, 13 Aug 2021 00:38:42 UTC |
	|         | default-k8s-different-port-20210813003110-676638           |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210813003110-676638 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:38:53 UTC | Fri, 13 Aug 2021 00:38:53 UTC |
	|         | default-k8s-different-port-20210813003110-676638           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| pause   | -p                                                         | default-k8s-different-port-20210813003110-676638 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:38:53 UTC | Fri, 13 Aug 2021 00:38:54 UTC |
	|         | default-k8s-different-port-20210813003110-676638           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| unpause | -p                                                         | default-k8s-different-port-20210813003110-676638 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:38:54 UTC | Fri, 13 Aug 2021 00:38:55 UTC |
	|         | default-k8s-different-port-20210813003110-676638           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813003110-676638 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:38:56 UTC | Fri, 13 Aug 2021 00:39:00 UTC |
	|         | default-k8s-different-port-20210813003110-676638           |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813003110-676638 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:39:00 UTC | Fri, 13 Aug 2021 00:39:01 UTC |
	|         | default-k8s-different-port-20210813003110-676638           |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20210813003901-676638 --memory=2200          | newest-cni-20210813003901-676638                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:39:01 UTC | Fri, 13 Aug 2021 00:39:51 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813003901-676638                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:39:51 UTC | Fri, 13 Aug 2021 00:39:51 UTC |
	|         | newest-cni-20210813003901-676638                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210813003901-676638                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:39:51 UTC | Fri, 13 Aug 2021 00:40:12 UTC |
	|         | newest-cni-20210813003901-676638                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210813003901-676638                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:40:12 UTC | Fri, 13 Aug 2021 00:40:12 UTC |
	|         | newest-cni-20210813003901-676638                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20210813003901-676638 --memory=2200          | newest-cni-20210813003901-676638                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:40:12 UTC | Fri, 13 Aug 2021 00:40:38 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210813003901-676638                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:40:38 UTC | Fri, 13 Aug 2021 00:40:39 UTC |
	|         | newest-cni-20210813003901-676638                           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20210813003901-676638                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:40:39 UTC | Fri, 13 Aug 2021 00:40:39 UTC |
	|         | newest-cni-20210813003901-676638                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20210813003901-676638                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:40:40 UTC | Fri, 13 Aug 2021 00:40:41 UTC |
	|         | newest-cni-20210813003901-676638                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20210813003901-676638                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:40:41 UTC | Fri, 13 Aug 2021 00:40:45 UTC |
	|         | newest-cni-20210813003901-676638                           |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20210813003901-676638                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:40:45 UTC | Fri, 13 Aug 2021 00:40:46 UTC |
	|         | newest-cni-20210813003901-676638                           |                                                  |         |         |                               |                               |
	| start   | -p auto-20210813002925-676638                              | auto-20210813002925-676638                       | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:40:46 UTC | Fri, 13 Aug 2021 00:41:55 UTC |
	|         | --memory=2048                                              |                                                  |         |         |                               |                               |
	|         | --alsologtostderr                                          |                                                  |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                              |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                  |         |         |                               |                               |
	| ssh     | -p auto-20210813002925-676638                              | auto-20210813002925-676638                       | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:41:55 UTC | Fri, 13 Aug 2021 00:41:56 UTC |
	|         | pgrep -a kubelet                                           |                                                  |         |         |                               |                               |
	| delete  | -p auto-20210813002925-676638                              | auto-20210813002925-676638                       | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:42:06 UTC | Fri, 13 Aug 2021 00:42:09 UTC |
	| start   | -p                                                         | embed-certs-20210813003107-676638                | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:36:25 UTC | Fri, 13 Aug 2021 00:42:15 UTC |
	|         | embed-certs-20210813003107-676638                          |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210813003107-676638                | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:42:26 UTC | Fri, 13 Aug 2021 00:42:26 UTC |
	|         | embed-certs-20210813003107-676638                          |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 00:42:09
	Running on machine: debian-jenkins-agent-12
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 00:42:09.814445  943278 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:42:09.814557  943278 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:42:09.814566  943278 out.go:311] Setting ErrFile to fd 2...
	I0813 00:42:09.814571  943278 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:42:09.814702  943278 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:42:09.815004  943278 out.go:305] Setting JSON to false
	I0813 00:42:09.855737  943278 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":15891,"bootTime":1628799438,"procs":336,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 00:42:09.855857  943278 start.go:121] virtualization: kvm guest
	I0813 00:42:09.858371  943278 out.go:177] * [custom-weave-20210813002927-676638] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 00:42:09.860165  943278 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:42:09.858534  943278 notify.go:169] Checking for updates...
	I0813 00:42:09.861966  943278 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 00:42:09.863758  943278 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:42:09.865538  943278 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 00:42:09.866368  943278 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 00:42:09.939678  943278 docker.go:132] docker version: linux-19.03.15
	I0813 00:42:09.939803  943278 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 00:42:10.049698  943278 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:77 SystemTime:2021-08-13 00:42:09.9797783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 00:42:10.049787  943278 docker.go:244] overlay module found
	I0813 00:42:10.052658  943278 out.go:177] * Using the docker driver based on user configuration
	I0813 00:42:10.052691  943278 start.go:278] selected driver: docker
	I0813 00:42:10.052698  943278 start.go:751] validating driver "docker" against <nil>
	I0813 00:42:10.052719  943278 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 00:42:10.052774  943278 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 00:42:10.052792  943278 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 00:42:10.054605  943278 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 00:42:10.055560  943278 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 00:42:10.165323  943278 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:77 SystemTime:2021-08-13 00:42:10.10973775 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 00:42:10.165485  943278 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 00:42:10.165637  943278 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 00:42:10.165659  943278 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0813 00:42:10.165677  943278 start_flags.go:272] Found "testdata/weavenet.yaml" CNI - setting NetworkPlugin=cni
	I0813 00:42:10.165686  943278 start_flags.go:277] config:
	{Name:custom-weave-20210813002927-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210813002927-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:42:10.168179  943278 out.go:177] * Starting control plane node custom-weave-20210813002927-676638 in cluster custom-weave-20210813002927-676638
	I0813 00:42:10.168238  943278 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 00:42:07.086898  911032 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 00:42:07.086972  911032 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 00:42:07.086985  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 00:42:07.087035  911032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813003107-676638
	I0813 00:42:07.111629  911032 addons.go:135] Setting addon default-storageclass=true in "embed-certs-20210813003107-676638"
	W0813 00:42:07.111676  911032 addons.go:147] addon default-storageclass should already be in state true
	I0813 00:42:07.111713  911032 host.go:66] Checking if "embed-certs-20210813003107-676638" exists ...
	I0813 00:42:07.112265  911032 cli_runner.go:115] Run: docker container inspect embed-certs-20210813003107-676638 --format={{.State.Status}}
	I0813 00:42:07.141052  911032 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20210813003107-676638" to be "Ready" ...
	I0813 00:42:07.141541  911032 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 00:42:07.156131  911032 node_ready.go:49] node "embed-certs-20210813003107-676638" has status "Ready":"True"
	I0813 00:42:07.156158  911032 node_ready.go:38] duration metric: took 15.06995ms waiting for node "embed-certs-20210813003107-676638" to be "Ready" ...
	I0813 00:42:07.156172  911032 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:42:07.161432  911032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/embed-certs-20210813003107-676638/id_rsa Username:docker}
	I0813 00:42:07.163140  911032 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-5tbf8" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:07.170142  911032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/embed-certs-20210813003107-676638/id_rsa Username:docker}
	I0813 00:42:07.170737  911032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/embed-certs-20210813003107-676638/id_rsa Username:docker}
	I0813 00:42:07.179801  911032 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 00:42:07.179827  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 00:42:07.179881  911032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813003107-676638
	I0813 00:42:07.242728  911032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/embed-certs-20210813003107-676638/id_rsa Username:docker}
	I0813 00:42:07.413470  911032 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 00:42:07.414525  911032 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 00:42:07.414549  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 00:42:07.494634  911032 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 00:42:07.494662  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 00:42:07.513155  911032 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 00:42:07.513196  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 00:42:07.590359  911032 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 00:42:07.590406  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 00:42:07.612495  911032 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 00:42:07.702652  911032 pod_ready.go:97] error getting pod "coredns-558bd4d5db-5tbf8" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-5tbf8" not found
	I0813 00:42:07.702685  911032 pod_ready.go:81] duration metric: took 539.513964ms waiting for pod "coredns-558bd4d5db-5tbf8" in "kube-system" namespace to be "Ready" ...
	E0813 00:42:07.702700  911032 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-5tbf8" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-5tbf8" not found
	I0813 00:42:07.702709  911032 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-9bdqj" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:07.722434  911032 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 00:42:07.722467  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 00:42:07.795457  911032 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 00:42:07.795486  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 00:42:07.815964  911032 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 00:42:07.815995  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 00:42:07.828521  911032 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 00:42:07.920836  911032 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 00:42:07.920868  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 00:42:08.007454  911032 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 00:42:08.007486  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 00:42:08.016468  911032 start.go:736] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
	I0813 00:42:08.107832  911032 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 00:42:08.107865  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 00:42:08.214566  911032 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 00:42:08.214602  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 00:42:08.310259  911032 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 00:42:08.310293  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 00:42:08.506021  911032 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 00:42:08.819783  911032 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.406216686s)
	I0813 00:42:08.819858  911032 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.207323507s)
	I0813 00:42:09.403819  911032 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.575244401s)
	I0813 00:42:09.403863  911032 addons.go:313] Verifying addon metrics-server=true in "embed-certs-20210813003107-676638"
	I0813 00:42:09.715781  911032 pod_ready.go:102] pod "coredns-558bd4d5db-9bdqj" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:10.169914  943278 out.go:177] * Pulling base image ...
	I0813 00:42:10.169991  943278 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:42:10.170041  943278 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 00:42:10.170050  943278 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0813 00:42:10.170056  943278 cache.go:56] Caching tarball of preloaded images
	I0813 00:42:10.170408  943278 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 00:42:10.170444  943278 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 00:42:10.170566  943278 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/config.json ...
	I0813 00:42:10.170587  943278 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/config.json: {Name:mkc852494605645a232bde25ec20290f1d76e998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:42:10.283460  943278 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0813 00:42:10.283499  943278 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0813 00:42:10.283514  943278 cache.go:205] Successfully downloaded all kic artifacts
	I0813 00:42:10.283557  943278 start.go:313] acquiring machines lock for custom-weave-20210813002927-676638: {Name:mk111d3a4b930a37a53e0c69523046f37729edd2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:42:10.283720  943278 start.go:317] acquired machines lock for "custom-weave-20210813002927-676638" in 142.145µs
	I0813 00:42:10.283753  943278 start.go:89] Provisioning new machine with config: &{Name:custom-weave-20210813002927-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210813002927-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 00:42:10.283828  943278 start.go:126] createHost starting for "" (driver="docker")
	I0813 00:42:10.298925  911032 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.792845826s)
	I0813 00:42:08.545000  894487 pod_ready.go:102] pod "metrics-server-8546d8b77b-5jdp6" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:10.545533  894487 pod_ready.go:102] pod "metrics-server-8546d8b77b-5jdp6" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:10.286743  943278 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0813 00:42:10.287053  943278 start.go:160] libmachine.API.Create for "custom-weave-20210813002927-676638" (driver="docker")
	I0813 00:42:10.287090  943278 client.go:168] LocalClient.Create starting
	I0813 00:42:10.287181  943278 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0813 00:42:10.287212  943278 main.go:130] libmachine: Decoding PEM data...
	I0813 00:42:10.287232  943278 main.go:130] libmachine: Parsing certificate...
	I0813 00:42:10.287359  943278 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0813 00:42:10.287377  943278 main.go:130] libmachine: Decoding PEM data...
	I0813 00:42:10.287387  943278 main.go:130] libmachine: Parsing certificate...
	I0813 00:42:10.287710  943278 cli_runner.go:115] Run: docker network inspect custom-weave-20210813002927-676638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 00:42:10.338481  943278 cli_runner.go:162] docker network inspect custom-weave-20210813002927-676638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 00:42:10.338559  943278 network_create.go:255] running [docker network inspect custom-weave-20210813002927-676638] to gather additional debugging logs...
	I0813 00:42:10.338580  943278 cli_runner.go:115] Run: docker network inspect custom-weave-20210813002927-676638
	W0813 00:42:10.383997  943278 cli_runner.go:162] docker network inspect custom-weave-20210813002927-676638 returned with exit code 1
	I0813 00:42:10.384035  943278 network_create.go:258] error running [docker network inspect custom-weave-20210813002927-676638]: docker network inspect custom-weave-20210813002927-676638: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20210813002927-676638
	I0813 00:42:10.384065  943278 network_create.go:260] output of [docker network inspect custom-weave-20210813002927-676638]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20210813002927-676638
	
	** /stderr **
	I0813 00:42:10.384121  943278 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 00:42:10.442035  943278 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006660a0] misses:0}
	I0813 00:42:10.442109  943278 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 00:42:10.442130  943278 network_create.go:106] attempt to create docker network custom-weave-20210813002927-676638 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0813 00:42:10.442186  943278 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20210813002927-676638
	I0813 00:42:10.524163  943278 network_create.go:90] docker network custom-weave-20210813002927-676638 192.168.49.0/24 created
	I0813 00:42:10.524223  943278 kic.go:106] calculated static IP "192.168.49.2" for the "custom-weave-20210813002927-676638" container
	I0813 00:42:10.524301  943278 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 00:42:10.572562  943278 cli_runner.go:115] Run: docker volume create custom-weave-20210813002927-676638 --label name.minikube.sigs.k8s.io=custom-weave-20210813002927-676638 --label created_by.minikube.sigs.k8s.io=true
	I0813 00:42:10.626215  943278 oci.go:102] Successfully created a docker volume custom-weave-20210813002927-676638
	I0813 00:42:10.626327  943278 cli_runner.go:115] Run: docker run --rm --name custom-weave-20210813002927-676638-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210813002927-676638 --entrypoint /usr/bin/test -v custom-weave-20210813002927-676638:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0813 00:42:11.472841  943278 oci.go:106] Successfully prepared a docker volume custom-weave-20210813002927-676638
	W0813 00:42:11.472921  943278 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 00:42:11.472930  943278 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 00:42:11.472989  943278 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 00:42:11.473002  943278 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:42:11.473041  943278 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 00:42:11.473113  943278 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20210813002927-676638:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 00:42:11.575489  943278 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20210813002927-676638 --name custom-weave-20210813002927-676638 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210813002927-676638 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20210813002927-676638 --network custom-weave-20210813002927-676638 --ip 192.168.49.2 --volume custom-weave-20210813002927-676638:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0813 00:42:12.109361  943278 cli_runner.go:115] Run: docker container inspect custom-weave-20210813002927-676638 --format={{.State.Running}}
	I0813 00:42:12.164816  943278 cli_runner.go:115] Run: docker container inspect custom-weave-20210813002927-676638 --format={{.State.Status}}
	I0813 00:42:12.224425  943278 cli_runner.go:115] Run: docker exec custom-weave-20210813002927-676638 stat /var/lib/dpkg/alternatives/iptables
	I0813 00:42:12.366981  943278 oci.go:278] the created container "custom-weave-20210813002927-676638" has a running status.
	I0813 00:42:12.367024  943278 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/custom-weave-20210813002927-676638/id_rsa...
	I0813 00:42:12.449300  943278 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/custom-weave-20210813002927-676638/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 00:42:12.828839  943278 cli_runner.go:115] Run: docker container inspect custom-weave-20210813002927-676638 --format={{.State.Status}}
	I0813 00:42:12.880359  943278 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 00:42:12.880383  943278 kic_runner.go:115] Args: [docker exec --privileged custom-weave-20210813002927-676638 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 00:42:10.301617  911032 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 00:42:10.301663  911032 addons.go:344] enableAddons completed in 3.298217614s
	I0813 00:42:11.716545  911032 pod_ready.go:102] pod "coredns-558bd4d5db-9bdqj" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:14.215433  911032 pod_ready.go:102] pod "coredns-558bd4d5db-9bdqj" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:14.717130  911032 pod_ready.go:92] pod "coredns-558bd4d5db-9bdqj" in "kube-system" namespace has status "Ready":"True"
	I0813 00:42:14.717160  911032 pod_ready.go:81] duration metric: took 7.014442275s waiting for pod "coredns-558bd4d5db-9bdqj" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:14.717175  911032 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210813003107-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:14.723459  911032 pod_ready.go:92] pod "etcd-embed-certs-20210813003107-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:42:14.723484  911032 pod_ready.go:81] duration metric: took 6.301103ms waiting for pod "etcd-embed-certs-20210813003107-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:14.723503  911032 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210813003107-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:14.734593  911032 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210813003107-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:42:14.734618  911032 pod_ready.go:81] duration metric: took 11.104664ms waiting for pod "kube-apiserver-embed-certs-20210813003107-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:14.734634  911032 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210813003107-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:14.740018  911032 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210813003107-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:42:14.740042  911032 pod_ready.go:81] duration metric: took 5.399184ms waiting for pod "kube-controller-manager-embed-certs-20210813003107-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:14.740056  911032 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bhdzr" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:14.792610  911032 pod_ready.go:92] pod "kube-proxy-bhdzr" in "kube-system" namespace has status "Ready":"True"
	I0813 00:42:14.792639  911032 pod_ready.go:81] duration metric: took 52.573174ms waiting for pod "kube-proxy-bhdzr" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:14.792661  911032 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210813003107-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:15.115317  911032 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210813003107-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:42:15.115417  911032 pod_ready.go:81] duration metric: took 322.742497ms waiting for pod "kube-scheduler-embed-certs-20210813003107-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:15.115451  911032 pod_ready.go:38] duration metric: took 7.959259332s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:42:15.115504  911032 api_server.go:50] waiting for apiserver process to appear ...
	I0813 00:42:15.115558  911032 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 00:42:15.204047  911032 api_server.go:70] duration metric: took 8.200893471s to wait for apiserver process to appear ...
	I0813 00:42:15.204082  911032 api_server.go:86] waiting for apiserver healthz status ...
	I0813 00:42:15.204095  911032 api_server.go:239] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0813 00:42:15.211809  911032 api_server.go:265] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0813 00:42:15.212821  911032 api_server.go:139] control plane version: v1.21.3
	I0813 00:42:15.212849  911032 api_server.go:129] duration metric: took 8.76047ms to wait for apiserver health ...
	I0813 00:42:15.212861  911032 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 00:42:15.318466  911032 system_pods.go:59] 9 kube-system pods found
	I0813 00:42:15.318508  911032 system_pods.go:61] "coredns-558bd4d5db-9bdqj" [f2545cec-a503-40cb-9cb3-741144b6320a] Running
	I0813 00:42:15.318517  911032 system_pods.go:61] "etcd-embed-certs-20210813003107-676638" [ff764b4e-ac7c-47fa-a769-e739def8d075] Running
	I0813 00:42:15.318524  911032 system_pods.go:61] "kindnet-m9wdh" [ac041149-ee5c-4e8f-a58b-3a12b8d54cb5] Running
	I0813 00:42:15.318531  911032 system_pods.go:61] "kube-apiserver-embed-certs-20210813003107-676638" [576de72b-b823-41a9-bfdb-5b61496caf1b] Running
	I0813 00:42:15.318538  911032 system_pods.go:61] "kube-controller-manager-embed-certs-20210813003107-676638" [08a62edf-ce98-4d39-b176-29fb55369ef7] Running
	I0813 00:42:15.318543  911032 system_pods.go:61] "kube-proxy-bhdzr" [4c60eef0-dca5-406f-a24f-a62d85933b3e] Running
	I0813 00:42:15.318552  911032 system_pods.go:61] "kube-scheduler-embed-certs-20210813003107-676638" [d682ae8b-0cfc-4bea-a1c9-8e007f5468bc] Running
	I0813 00:42:15.318569  911032 system_pods.go:61] "metrics-server-7c784ccb57-d6wcs" [ca3671c5-bdeb-4af0-8e6c-3e69eddf7645] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 00:42:15.318583  911032 system_pods.go:61] "storage-provisioner" [850780eb-d4e0-457d-95fd-d3a046e8cac5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 00:42:15.318601  911032 system_pods.go:74] duration metric: took 105.728892ms to wait for pod list to return data ...
	I0813 00:42:15.318620  911032 default_sa.go:34] waiting for default service account to be created ...
	I0813 00:42:15.514317  911032 default_sa.go:45] found service account: "default"
	I0813 00:42:15.514356  911032 default_sa.go:55] duration metric: took 195.728599ms for default service account to be created ...
	I0813 00:42:15.514368  911032 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 00:42:15.718808  911032 system_pods.go:86] 9 kube-system pods found
	I0813 00:42:15.718847  911032 system_pods.go:89] "coredns-558bd4d5db-9bdqj" [f2545cec-a503-40cb-9cb3-741144b6320a] Running
	I0813 00:42:15.718856  911032 system_pods.go:89] "etcd-embed-certs-20210813003107-676638" [ff764b4e-ac7c-47fa-a769-e739def8d075] Running
	I0813 00:42:15.718861  911032 system_pods.go:89] "kindnet-m9wdh" [ac041149-ee5c-4e8f-a58b-3a12b8d54cb5] Running
	I0813 00:42:15.718867  911032 system_pods.go:89] "kube-apiserver-embed-certs-20210813003107-676638" [576de72b-b823-41a9-bfdb-5b61496caf1b] Running
	I0813 00:42:15.718875  911032 system_pods.go:89] "kube-controller-manager-embed-certs-20210813003107-676638" [08a62edf-ce98-4d39-b176-29fb55369ef7] Running
	I0813 00:42:15.718882  911032 system_pods.go:89] "kube-proxy-bhdzr" [4c60eef0-dca5-406f-a24f-a62d85933b3e] Running
	I0813 00:42:15.718889  911032 system_pods.go:89] "kube-scheduler-embed-certs-20210813003107-676638" [d682ae8b-0cfc-4bea-a1c9-8e007f5468bc] Running
	I0813 00:42:15.718903  911032 system_pods.go:89] "metrics-server-7c784ccb57-d6wcs" [ca3671c5-bdeb-4af0-8e6c-3e69eddf7645] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 00:42:15.718914  911032 system_pods.go:89] "storage-provisioner" [850780eb-d4e0-457d-95fd-d3a046e8cac5] Running
	I0813 00:42:15.718925  911032 system_pods.go:126] duration metric: took 204.550995ms to wait for k8s-apps to be running ...
	I0813 00:42:15.718937  911032 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 00:42:15.718990  911032 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:42:15.731278  911032 system_svc.go:56] duration metric: took 12.330165ms WaitForService to wait for kubelet.
	I0813 00:42:15.731316  911032 kubeadm.go:547] duration metric: took 8.728164828s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 00:42:15.731352  911032 node_conditions.go:102] verifying NodePressure condition ...
	I0813 00:42:15.915074  911032 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 00:42:15.915106  911032 node_conditions.go:123] node cpu capacity is 8
	I0813 00:42:15.915121  911032 node_conditions.go:105] duration metric: took 183.762685ms to run NodePressure ...
	I0813 00:42:15.915134  911032 start.go:231] waiting for startup goroutines ...
	I0813 00:42:15.974318  911032 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 00:42:15.978693  911032 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210813003107-676638" cluster and "default" namespace by default
	I0813 00:42:13.044747  894487 pod_ready.go:102] pod "metrics-server-8546d8b77b-5jdp6" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:15.545520  894487 pod_ready.go:102] pod "metrics-server-8546d8b77b-5jdp6" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:15.723544  943278 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20210813002927-676638:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.2503568s)
	I0813 00:42:15.723596  943278 kic.go:188] duration metric: took 4.250552 seconds to extract preloaded images to volume
	I0813 00:42:15.723760  943278 cli_runner.go:115] Run: docker container inspect custom-weave-20210813002927-676638 --format={{.State.Status}}
	I0813 00:42:15.771745  943278 machine.go:88] provisioning docker machine ...
	I0813 00:42:15.771791  943278 ubuntu.go:169] provisioning hostname "custom-weave-20210813002927-676638"
	I0813 00:42:15.771866  943278 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210813002927-676638
	I0813 00:42:15.820129  943278 main.go:130] libmachine: Using SSH client type: native
	I0813 00:42:15.820371  943278 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33462 <nil> <nil>}
	I0813 00:42:15.820394  943278 main.go:130] libmachine: About to run SSH command:
	sudo hostname custom-weave-20210813002927-676638 && echo "custom-weave-20210813002927-676638" | sudo tee /etc/hostname
	I0813 00:42:15.974613  943278 main.go:130] libmachine: SSH cmd err, output: <nil>: custom-weave-20210813002927-676638
	
	I0813 00:42:15.974692  943278 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210813002927-676638
	I0813 00:42:16.033977  943278 main.go:130] libmachine: Using SSH client type: native
	I0813 00:42:16.034211  943278 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33462 <nil> <nil>}
	I0813 00:42:16.034253  943278 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-weave-20210813002927-676638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20210813002927-676638/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-weave-20210813002927-676638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 00:42:16.154105  943278 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 00:42:16.154143  943278 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0813 00:42:16.154170  943278 ubuntu.go:177] setting up certificates
	I0813 00:42:16.154185  943278 provision.go:83] configureAuth start
	I0813 00:42:16.154242  943278 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20210813002927-676638
	I0813 00:42:16.206046  943278 provision.go:137] copyHostCerts
	I0813 00:42:16.206121  943278 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem, removing ...
	I0813 00:42:16.206137  943278 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 00:42:16.206196  943278 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0813 00:42:16.206285  943278 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem, removing ...
	I0813 00:42:16.206299  943278 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 00:42:16.206330  943278 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1675 bytes)
	I0813 00:42:16.206399  943278 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem, removing ...
	I0813 00:42:16.206410  943278 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 00:42:16.206440  943278 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1082 bytes)
	I0813 00:42:16.206494  943278 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.custom-weave-20210813002927-676638 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20210813002927-676638]
	I0813 00:42:16.332470  943278 provision.go:171] copyRemoteCerts
	I0813 00:42:16.332554  943278 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 00:42:16.332611  943278 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210813002927-676638
	I0813 00:42:16.378621  943278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33462 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/custom-weave-20210813002927-676638/id_rsa Username:docker}
	I0813 00:42:16.465118  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 00:42:16.484423  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0813 00:42:16.504977  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 00:42:16.525339  943278 provision.go:86] duration metric: configureAuth took 371.135623ms
	I0813 00:42:16.525375  943278 ubuntu.go:193] setting minikube options for container-runtime
	I0813 00:42:16.525655  943278 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210813002927-676638
	I0813 00:42:16.574228  943278 main.go:130] libmachine: Using SSH client type: native
	I0813 00:42:16.574402  943278 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33462 <nil> <nil>}
	I0813 00:42:16.574420  943278 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 00:42:17.003267  943278 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 00:42:17.003307  943278 machine.go:91] provisioned docker machine in 1.231536462s
	I0813 00:42:17.003318  943278 client.go:171] LocalClient.Create took 6.716216728s
	I0813 00:42:17.003331  943278 start.go:168] duration metric: libmachine.API.Create for "custom-weave-20210813002927-676638" took 6.71627879s
	I0813 00:42:17.003342  943278 start.go:267] post-start starting for "custom-weave-20210813002927-676638" (driver="docker")
	I0813 00:42:17.003350  943278 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 00:42:17.003422  943278 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 00:42:17.003496  943278 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210813002927-676638
	I0813 00:42:17.059533  943278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33462 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/custom-weave-20210813002927-676638/id_rsa Username:docker}
	I0813 00:42:17.145188  943278 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 00:42:17.148246  943278 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 00:42:17.148270  943278 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 00:42:17.148279  943278 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 00:42:17.148286  943278 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 00:42:17.148297  943278 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0813 00:42:17.148342  943278 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0813 00:42:17.148425  943278 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem -> 6766382.pem in /etc/ssl/certs
	I0813 00:42:17.148518  943278 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 00:42:17.156928  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem --> /etc/ssl/certs/6766382.pem (1708 bytes)
	I0813 00:42:17.175894  943278 start.go:270] post-start completed in 172.53531ms
	I0813 00:42:17.176277  943278 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20210813002927-676638
	I0813 00:42:17.224288  943278 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/config.json ...
	I0813 00:42:17.224577  943278 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 00:42:17.224649  943278 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210813002927-676638
	I0813 00:42:17.270631  943278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33462 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/custom-weave-20210813002927-676638/id_rsa Username:docker}
	I0813 00:42:17.354311  943278 start.go:129] duration metric: createHost completed in 7.070464797s
	I0813 00:42:17.354343  943278 start.go:80] releasing machines lock for "custom-weave-20210813002927-676638", held for 7.070607291s
	I0813 00:42:17.354457  943278 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20210813002927-676638
	I0813 00:42:17.402267  943278 ssh_runner.go:149] Run: systemctl --version
	I0813 00:42:17.402317  943278 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 00:42:17.402341  943278 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210813002927-676638
	I0813 00:42:17.402375  943278 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210813002927-676638
	I0813 00:42:17.451480  943278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33462 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/custom-weave-20210813002927-676638/id_rsa Username:docker}
	I0813 00:42:17.453352  943278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33462 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/custom-weave-20210813002927-676638/id_rsa Username:docker}
	I0813 00:42:17.567858  943278 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 00:42:17.589834  943278 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 00:42:17.601073  943278 docker.go:153] disabling docker service ...
	I0813 00:42:17.601137  943278 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 00:42:17.617960  943278 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 00:42:17.628279  943278 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 00:42:17.699996  943278 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 00:42:17.781667  943278 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 00:42:17.792750  943278 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 00:42:17.815999  943278 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 00:42:17.826734  943278 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 00:42:17.834452  943278 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
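	[editor's note] The sequence above — probe the bridge-netfilter sysctl, and fall back to loading `br_netfilter` when `/proc/sys/net/bridge/bridge-nf-call-iptables` is absent — can be sketched as follows. This is a simplified illustration, not minikube's actual code; a file-existence check stands in for the `sudo sysctl` probe:

```go
package main

import (
	"fmt"
	"os"
)

// sysctlVisible reports whether a sysctl's /proc file exists, i.e.
// whether the backing kernel module (here br_netfilter) is loaded.
func sysctlVisible(path string) bool {
	_, err := os.Stat(path)
	return err == nil
}

func main() {
	const p = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if sysctlVisible(p) {
		fmt.Println("bridge netfilter sysctl present")
	} else {
		// This is the branch the log hits, followed by
		// "sudo modprobe br_netfilter" to load the module.
		fmt.Println("sysctl missing; would load the br_netfilter module")
	}
}
```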
	I0813 00:42:17.834516  943278 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 00:42:17.842782  943278 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 00:42:17.850510  943278 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 00:42:17.913519  943278 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 00:42:17.924339  943278 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 00:42:17.924416  943278 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:42:17.928353  943278 start.go:417] Will wait 60s for crictl version
	I0813 00:42:17.928420  943278 ssh_runner.go:149] Run: sudo crictl version
	I0813 00:42:17.963042  943278 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 00:42:17.963141  943278 ssh_runner.go:149] Run: crio --version
	I0813 00:42:18.032191  943278 ssh_runner.go:149] Run: crio --version
	I0813 00:42:18.110974  943278 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0813 00:42:18.111073  943278 cli_runner.go:115] Run: docker network inspect custom-weave-20210813002927-676638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 00:42:18.155349  943278 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0813 00:42:18.159054  943278 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
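	[editor's note] The /etc/hosts edit above (grep away any stale `host.minikube.internal` line, append the fresh mapping, copy the result back) is plain text manipulation. A hedged Go sketch of the same upsert, operating on a string rather than the real `/etc/hosts`:

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any line already ending in "\t"+name and appends a
// fresh "ip\tname" entry, mirroring the grep -v / echo pipeline in the log.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n10.0.0.1\thost.minikube.internal\n"
	// The stale 10.0.0.1 mapping is replaced by the gateway address.
	fmt.Print(upsertHost(hosts, "192.168.49.1", "host.minikube.internal"))
}
```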
	I0813 00:42:18.169044  943278 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:42:18.169123  943278 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 00:42:18.218281  943278 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 00:42:18.218310  943278 crio.go:333] Images already preloaded, skipping extraction
	I0813 00:42:18.218361  943278 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 00:42:18.244765  943278 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 00:42:18.244795  943278 cache_images.go:74] Images are preloaded, skipping loading
	I0813 00:42:18.244888  943278 ssh_runner.go:149] Run: crio config
	I0813 00:42:18.327039  943278 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0813 00:42:18.327113  943278 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 00:42:18.327137  943278 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20210813002927-676638 NodeName:custom-weave-20210813002927-676638 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 00:42:18.327341  943278 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "custom-weave-20210813002927-676638"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 00:42:18.327487  943278 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=custom-weave-20210813002927-676638 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210813002927-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:}
	I0813 00:42:18.327551  943278 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 00:42:18.335908  943278 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 00:42:18.336013  943278 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 00:42:18.343269  943278 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (565 bytes)
	I0813 00:42:18.356398  943278 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 00:42:18.369712  943278 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0813 00:42:18.383241  943278 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0813 00:42:18.386736  943278 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 00:42:18.396811  943278 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638 for IP: 192.168.49.2
	I0813 00:42:18.396878  943278 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0813 00:42:18.396902  943278 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0813 00:42:18.396969  943278 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.key
	I0813 00:42:18.396983  943278 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt with IP's: []
	I0813 00:42:18.535413  943278 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt ...
	I0813 00:42:18.535448  943278 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt: {Name:mk04cc89e6435cf8ec29a0b091a2a0469c20559a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:42:18.535684  943278 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.key ...
	I0813 00:42:18.535698  943278 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.key: {Name:mk69fa105585e4b7038c5a95f83b3fe1554b58a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:42:18.535782  943278 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.key.dd3b5fb2
	I0813 00:42:18.535792  943278 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 00:42:18.722598  943278 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.crt.dd3b5fb2 ...
	I0813 00:42:18.722635  943278 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.crt.dd3b5fb2: {Name:mk04b4b491c73d758ec763ec105d7efba6aa0a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:42:18.722831  943278 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.key.dd3b5fb2 ...
	I0813 00:42:18.722844  943278 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.key.dd3b5fb2: {Name:mkd6b9e6c1f422d3aa27a6b8b128c12ae067209b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:42:18.722926  943278 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.crt
	I0813 00:42:18.723029  943278 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.key
	I0813 00:42:18.723091  943278 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/proxy-client.key
	I0813 00:42:18.723101  943278 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/proxy-client.crt with IP's: []
	I0813 00:42:18.842577  943278 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/proxy-client.crt ...
	I0813 00:42:18.842616  943278 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/proxy-client.crt: {Name:mk6470976c9ef5b7d98d26083c3f6bda036e2bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:42:18.842842  943278 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/proxy-client.key ...
	I0813 00:42:18.842860  943278 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/proxy-client.key: {Name:mka2f24e0b5f012d9d19b0af08f9a48823809e2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:42:18.843078  943278 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/676638.pem (1338 bytes)
	W0813 00:42:18.843125  943278 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/676638_empty.pem, impossibly tiny 0 bytes
	I0813 00:42:18.843141  943278 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 00:42:18.843180  943278 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1082 bytes)
	I0813 00:42:18.843213  943278 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0813 00:42:18.843251  943278 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1675 bytes)
	I0813 00:42:18.843323  943278 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem (1708 bytes)
	I0813 00:42:18.844328  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 00:42:18.863518  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 00:42:18.908185  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 00:42:18.926326  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 00:42:18.944615  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 00:42:18.962995  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 00:42:18.981145  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 00:42:18.999255  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 00:42:19.016472  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 00:42:19.034845  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/676638.pem --> /usr/share/ca-certificates/676638.pem (1338 bytes)
	I0813 00:42:19.053001  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem --> /usr/share/ca-certificates/6766382.pem (1708 bytes)
	I0813 00:42:19.069864  943278 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 00:42:19.083542  943278 ssh_runner.go:149] Run: openssl version
	I0813 00:42:19.088683  943278 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6766382.pem && ln -fs /usr/share/ca-certificates/6766382.pem /etc/ssl/certs/6766382.pem"
	I0813 00:42:19.097118  943278 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6766382.pem
	I0813 00:42:19.100507  943278 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 13 00:05 /usr/share/ca-certificates/6766382.pem
	I0813 00:42:19.100554  943278 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6766382.pem
	I0813 00:42:19.105591  943278 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6766382.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 00:42:19.113537  943278 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 00:42:19.121304  943278 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:42:19.124411  943278 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 12 23:55 /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:42:19.124471  943278 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:42:19.129901  943278 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 00:42:19.137414  943278 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/676638.pem && ln -fs /usr/share/ca-certificates/676638.pem /etc/ssl/certs/676638.pem"
	I0813 00:42:19.145366  943278 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/676638.pem
	I0813 00:42:19.148604  943278 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 13 00:05 /usr/share/ca-certificates/676638.pem
	I0813 00:42:19.148659  943278 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/676638.pem
	I0813 00:42:19.153737  943278 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/676638.pem /etc/ssl/certs/51391683.0"
	I0813 00:42:19.161519  943278 kubeadm.go:390] StartCluster: {Name:custom-weave-20210813002927-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210813002927-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:42:19.161629  943278 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 00:42:19.161703  943278 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 00:42:19.186425  943278 cri.go:76] found id: ""
	I0813 00:42:19.186506  943278 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 00:42:19.194155  943278 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 00:42:19.201720  943278 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 00:42:19.201795  943278 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 00:42:19.209500  943278 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 00:42:19.209559  943278 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 00:42:19.521891  943278 out.go:204]   - Generating certificates and keys ...
	I0813 00:42:18.044936  894487 pod_ready.go:102] pod "metrics-server-8546d8b77b-5jdp6" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:20.544523  894487 pod_ready.go:102] pod "metrics-server-8546d8b77b-5jdp6" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:22.544705  894487 pod_ready.go:102] pod "metrics-server-8546d8b77b-5jdp6" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:21.833918  943278 out.go:204]   - Booting up control plane ...
	I0813 00:42:25.044639  894487 pod_ready.go:102] pod "metrics-server-8546d8b77b-5jdp6" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:27.544508  894487 pod_ready.go:102] pod "metrics-server-8546d8b77b-5jdp6" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 00:36:27 UTC, end at Fri 2021-08-13 00:42:30 UTC. --
	Aug 13 00:42:11 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:11.250088016Z" level=info msg="Started container 040094321d48d6d2b00866b99d09b513dda22a6c18ee7ca6fd42018867148478: kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-fqm5d/kubernetes-dashboard" id=6c8b6fd5-a44d-4060-963f-90071fdc9e8a name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 00:42:11 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:11.419862721Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=fd0a4b25-1ed0-4e19-aa4b-596c3353a92a name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:42:11 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:11.420163553Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=fd0a4b25-1ed0-4e19-aa4b-596c3353a92a name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:42:11 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:11.930109588Z" level=info msg="Trying to access \"k8s.gcr.io/echoserver:1.4\""
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.041810063Z" level=info msg="Pulled image: k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=d97ddfd6-1651-4adc-ae41-0fa0d06409d5 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.042701436Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=8d0ca4a3-506d-4f9e-a29d-ceb70fef2a13 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.044534147Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8d0ca4a3-506d-4f9e-a29d-ceb70fef2a13 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.045605227Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-tbr4l/dashboard-metrics-scraper" id=fb1e3589-b94f-4fea-a7b0-d053e72dd176 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.216807471Z" level=info msg="Created container 51e431ed38a8c6f1b8ef8d5732d6acc26f37235db7f482c235e23ddc3ff31cb8: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-tbr4l/dashboard-metrics-scraper" id=fb1e3589-b94f-4fea-a7b0-d053e72dd176 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.217453342Z" level=info msg="Starting container: 51e431ed38a8c6f1b8ef8d5732d6acc26f37235db7f482c235e23ddc3ff31cb8" id=7ce2e258-44a0-4475-812b-cf518925cf62 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.244869487Z" level=info msg="Started container 51e431ed38a8c6f1b8ef8d5732d6acc26f37235db7f482c235e23ddc3ff31cb8: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-tbr4l/dashboard-metrics-scraper" id=7ce2e258-44a0-4475-812b-cf518925cf62 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.436919349Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=c58b1fe6-857d-4a31-8d9a-1a82f7674d1f name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.438868257Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c58b1fe6-857d-4a31-8d9a-1a82f7674d1f name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.440517454Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=c52f4b13-6234-4ab2-9313-793cecfff49e name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.442477186Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c52f4b13-6234-4ab2-9313-793cecfff49e name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.443298116Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-tbr4l/dashboard-metrics-scraper" id=6a29536b-9869-437c-90a1-baa85863c628 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.604871188Z" level=info msg="Created container 45854d5646e93d9160501134d3f2d50d62a88771df74a22cfb6256b564ecde00: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-tbr4l/dashboard-metrics-scraper" id=6a29536b-9869-437c-90a1-baa85863c628 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.605525037Z" level=info msg="Starting container: 45854d5646e93d9160501134d3f2d50d62a88771df74a22cfb6256b564ecde00" id=dc99ba9e-e60e-443c-9b5c-1a0e1a3baa98 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.632270716Z" level=info msg="Started container 45854d5646e93d9160501134d3f2d50d62a88771df74a22cfb6256b564ecde00: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-tbr4l/dashboard-metrics-scraper" id=dc99ba9e-e60e-443c-9b5c-1a0e1a3baa98 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 00:42:19 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:19.440233230Z" level=info msg="Removing container: 51e431ed38a8c6f1b8ef8d5732d6acc26f37235db7f482c235e23ddc3ff31cb8" id=74d06218-3ca8-4cb7-9af9-a7fc34023dc6 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 00:42:19 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:19.481615203Z" level=info msg="Removed container 51e431ed38a8c6f1b8ef8d5732d6acc26f37235db7f482c235e23ddc3ff31cb8: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-tbr4l/dashboard-metrics-scraper" id=74d06218-3ca8-4cb7-9af9-a7fc34023dc6 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 00:42:26 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:26.308170356Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=996372ff-6e2e-4e79-9a01-dfa4b3c873f1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:42:26 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:26.308402055Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=996372ff-6e2e-4e79-9a01-dfa4b3c873f1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:42:26 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:26.309043349Z" level=info msg="Pulling image: fake.domain/k8s.gcr.io/echoserver:1.4" id=eaadf516-2e7a-4fac-900b-c15a2251338b name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 00:42:26 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:26.327310697Z" level=info msg="Trying to access \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID
	45854d5646e93       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   11 seconds ago      Exited              dashboard-metrics-scraper   1                   e6ce6988990c7
	040094321d48d       9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db   19 seconds ago      Running             kubernetes-dashboard        0                   86cffc3750db7
	65b03c21c2ec0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   19 seconds ago      Running             storage-provisioner         0                   cb925b12d495b
	ea689d9befa0c       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   21 seconds ago      Running             coredns                     0                   e64d84890eeb1
	9c14caeff902e       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   22 seconds ago      Running             kube-proxy                  0                   df99cf3e1ebfb
	01db811538c75       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   22 seconds ago      Running             kindnet-cni                 0                   fa30e69c4410e
	5c87a3142b07a       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   43 seconds ago      Running             kube-scheduler              0                   203736ee9a567
	6df37dc960612       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   43 seconds ago      Running             etcd                        0                   fce2af5d29500
	4e9effa00d573       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   43 seconds ago      Running             kube-controller-manager     0                   d854327c009d7
	29aee76a5223f       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   43 seconds ago      Running             kube-apiserver              0                   407478dc20340
	
	* 
	* ==> coredns [ea689d9befa0c599dd0052be7df63a63f89b17670c14e1c45dbb4df86d8898b4] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20210813003107-676638
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20210813003107-676638
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19
	                    minikube.k8s.io/name=embed-certs-20210813003107-676638
	                    minikube.k8s.io/updated_at=2021_08_13T00_41_54_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 00:41:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20210813003107-676638
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 00:42:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 00:42:29 +0000   Fri, 13 Aug 2021 00:41:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 00:42:29 +0000   Fri, 13 Aug 2021 00:41:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 00:42:29 +0000   Fri, 13 Aug 2021 00:41:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 00:42:29 +0000   Fri, 13 Aug 2021 00:42:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-20210813003107-676638
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 760e67beb8554645829f2357c8eb4ae7
	  System UUID:                b9cf02f7-a0a3-4d65-9e41-ca1ecfa1ffa6
	  Boot ID:                    f12e4c71-5c79-4cb7-b9de-5d4c99f61cf1
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-9bdqj                                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-embed-certs-20210813003107-676638                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-m9wdh                                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-20210813003107-676638             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-embed-certs-20210813003107-676638    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-bhdzr                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-20210813003107-676638             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 metrics-server-7c784ccb57-d6wcs                              100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         21s
	  kube-system                 storage-provisioner                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-tbr4l                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-fqm5d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             520Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  45s (x5 over 45s)  kubelet     Node embed-certs-20210813003107-676638 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x5 over 45s)  kubelet     Node embed-certs-20210813003107-676638 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x5 over 45s)  kubelet     Node embed-certs-20210813003107-676638 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet     Node embed-certs-20210813003107-676638 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet     Node embed-certs-20210813003107-676638 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet     Node embed-certs-20210813003107-676638 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             31s                kubelet     Node embed-certs-20210813003107-676638 status is now: NodeNotReady
	  Normal  NodeReady                24s                kubelet     Node embed-certs-20210813003107-676638 status is now: NodeReady
	  Normal  Starting                 21s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +3.583653] IPv4: martian source 10.244.0.2 from 10.96.0.1, on dev br-c83c1e95b109
	[  +0.000003] ll header: 00000000: 02 42 6b e2 99 34 02 42 c0 a8 5e 02 08 00        .Bk..4.B..^...
	[  +5.503929] IPv4: martian source 10.244.0.2 from 10.96.0.1, on dev br-c83c1e95b109
	[  +0.000003] ll header: 00000000: 02 42 6b e2 99 34 02 42 c0 a8 5e 02 08 00        .Bk..4.B..^...
	[  +1.407633] IPv4: martian source 10.244.0.2 from 10.96.0.1, on dev br-c83c1e95b109
	[  +0.000003] ll header: 00000000: 02 42 6b e2 99 34 02 42 c0 a8 5e 02 08 00        .Bk..4.B..^...
	[  +0.709913] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 52 9a bb 7e fa a0 08 06        ......R..~....
	[  +0.000003] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 52 9a bb 7e fa a0 08 06        ......R..~....
	[ +13.113088] IPv4: martian source 10.244.0.2 from 10.96.0.1, on dev br-c83c1e95b109
	[  +0.000002] ll header: 00000000: 02 42 6b e2 99 34 02 42 c0 a8 5e 02 08 00        .Bk..4.B..^...
	[  +1.437183] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth0567c5b4
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 06 48 0f 1a 6b ec 08 06        .......H..k...
	[  +3.912036] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vetha6038f6d
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 46 17 bb d7 24 a8 08 06        ......F...$...
	[Aug13 00:42] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethb2574714
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7e 15 80 76 73 6b 08 06        ......~..vsk..
	[  +0.547641] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethedbfb420
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 82 cc db 01 6c 04 08 06        ..........l...
	[  +0.108325] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth383fe7ef
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff be f7 08 35 56 4e 08 06        .........5VN..
	[  +1.315066] cgroup: cgroup2: unknown option "nsdelegate"
	[  +8.350581] IPv4: martian source 10.244.0.2 from 10.96.0.1, on dev br-c83c1e95b109
	[  +0.000003] ll header: 00000000: 02 42 6b e2 99 34 02 42 c0 a8 5e 02 08 00        .Bk..4.B..^...
	
	* 
	* ==> etcd [6df37dc960612aba9ced21bd1d7e1bc1ecddb932504c447d3ba7d8b184b0bac1] <==
	* raft2021/08/13 00:41:46 INFO: dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)
	2021-08-13 00:41:46.499455 W | auth: simple token is not cryptographically signed
	2021-08-13 00:41:46.505387 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-13 00:41:46.505670 I | etcdserver: dfc97eb0aae75b33 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/13 00:41:46 INFO: dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)
	2021-08-13 00:41:46.506223 I | etcdserver/membership: added member dfc97eb0aae75b33 [https://192.168.94.2:2380] to cluster da400bbece288f5a
	2021-08-13 00:41:46.507832 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 00:41:46.507939 I | embed: listening for peers on 192.168.94.2:2380
	2021-08-13 00:41:46.508004 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/13 00:41:47 INFO: dfc97eb0aae75b33 is starting a new election at term 1
	raft2021/08/13 00:41:47 INFO: dfc97eb0aae75b33 became candidate at term 2
	raft2021/08/13 00:41:47 INFO: dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2
	raft2021/08/13 00:41:47 INFO: dfc97eb0aae75b33 became leader at term 2
	raft2021/08/13 00:41:47 INFO: raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2
	2021-08-13 00:41:47.196994 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 00:41:47.197016 I | embed: ready to serve client requests
	2021-08-13 00:41:47.197068 I | etcdserver: published {Name:embed-certs-20210813003107-676638 ClientURLs:[https://192.168.94.2:2379]} to cluster da400bbece288f5a
	2021-08-13 00:41:47.197082 I | embed: ready to serve client requests
	2021-08-13 00:41:47.197714 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 00:41:47.197784 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 00:41:47.199500 I | embed: serving client requests on 192.168.94.2:2379
	2021-08-13 00:41:47.199675 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 00:42:07.296033 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:42:13.699430 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:42:23.699599 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  00:42:30 up  4:25,  0 users,  load average: 1.59, 1.93, 2.19
	Linux embed-certs-20210813003107-676638 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [29aee76a5223f84d79250f1a8d80fff6f9d8438997790d2000c77ab873587a02] <==
	* I0813 00:41:50.603239       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0813 00:41:50.603664       1 cache.go:39] Caches are synced for autoregister controller
	I0813 00:41:51.502242       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 00:41:51.502272       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 00:41:51.506766       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0813 00:41:51.512271       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0813 00:41:51.512296       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 00:41:51.995760       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 00:41:52.027120       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0813 00:41:52.121476       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I0813 00:41:52.122556       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 00:41:52.126159       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0813 00:41:53.151189       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 00:41:53.725180       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 00:41:53.798806       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 00:41:59.108974       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 00:42:06.660045       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 00:42:06.708838       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	W0813 00:42:11.791736       1 handler_proxy.go:102] no RequestInfo found in the context
	E0813 00:42:11.791846       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 00:42:11.791860       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0813 00:42:28.566293       1 client.go:360] parsed scheme: "passthrough"
	I0813 00:42:28.566341       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 00:42:28.566349       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [4e9effa00d57335d8efdfe7b670b81bda098522d05271bc3e229257b0c283ab0] <==
	* I0813 00:42:09.014144       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I0813 00:42:09.101341       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0813 00:42:09.112448       1 replica_set.go:532] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0813 00:42:09.194805       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-d6wcs"
	I0813 00:42:09.712823       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0813 00:42:09.721683       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 00:42:09.724716       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0813 00:42:09.793971       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 00:42:09.796570       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 00:42:09.800879       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 00:42:09.804593       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 00:42:09.807440       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 00:42:09.809617       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 00:42:09.809619       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 00:42:09.810219       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 00:42:09.810273       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 00:42:09.893461       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 00:42:09.893845       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 00:42:09.899025       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 00:42:09.899118       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 00:42:09.900976       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 00:42:09.900410       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 00:42:09.918825       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-fqm5d"
	I0813 00:42:09.997639       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-tbr4l"
	I0813 00:42:11.004795       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [9c14caeff902e39d3266b8a18e769d2c677e22b928256cfd576ad3e8125aefdf] <==
	* I0813 00:42:08.715395       1 node.go:172] Successfully retrieved node IP: 192.168.94.2
	I0813 00:42:08.715466       1 server_others.go:140] Detected node IP 192.168.94.2
	W0813 00:42:08.715497       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 00:42:09.105453       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 00:42:09.105542       1 server_others.go:212] Using iptables Proxier.
	I0813 00:42:09.105557       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 00:42:09.105590       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 00:42:09.106434       1 server.go:643] Version: v1.21.3
	I0813 00:42:09.107265       1 config.go:315] Starting service config controller
	I0813 00:42:09.107365       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 00:42:09.107461       1 config.go:224] Starting endpoint slice config controller
	I0813 00:42:09.107476       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 00:42:09.195011       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 00:42:09.197411       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 00:42:09.212834       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 00:42:09.213071       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [5c87a3142b07a277f04ffd2c844dab82a7e565d4805d5bd2ed00ec41b8d37d2e] <==
	* I0813 00:41:50.604517       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0813 00:41:50.604582       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 00:41:50.607503       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 00:41:50.607739       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 00:41:50.607946       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 00:41:50.608040       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 00:41:50.608120       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 00:41:50.608267       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 00:41:50.608268       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 00:41:50.609112       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 00:41:50.609191       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 00:41:50.609411       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 00:41:50.609563       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 00:41:50.609920       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 00:41:50.609996       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 00:41:50.610141       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 00:41:51.490510       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 00:41:51.590449       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 00:41:51.615229       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 00:41:51.690771       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 00:41:51.725348       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 00:41:51.767535       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 00:41:51.823986       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 00:41:51.971043       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0813 00:41:54.904756       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 00:36:27 UTC, end at Fri 2021-08-13 00:42:30 UTC. --
	Aug 13 00:42:09 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:09.923688    5686 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 00:42:10 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:10.008915    5686 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 00:42:10 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:10.107413    5686 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a208d674-9151-445a-8368-919815e63b5a-tmp-volume\") pod \"kubernetes-dashboard-6fcdf4f6d-fqm5d\" (UID: \"a208d674-9151-445a-8368-919815e63b5a\") "
	Aug 13 00:42:10 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:10.107481    5686 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp5v9\" (UniqueName: \"kubernetes.io/projected/a208d674-9151-445a-8368-919815e63b5a-kube-api-access-dp5v9\") pod \"kubernetes-dashboard-6fcdf4f6d-fqm5d\" (UID: \"a208d674-9151-445a-8368-919815e63b5a\") "
	Aug 13 00:42:10 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:10.209688    5686 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/52069898-1ec9-4b87-a7c5-aae9fcf1301a-tmp-volume\") pod \"dashboard-metrics-scraper-8685c45546-tbr4l\" (UID: \"52069898-1ec9-4b87-a7c5-aae9fcf1301a\") "
	Aug 13 00:42:10 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:10.209787    5686 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7s9h\" (UniqueName: \"kubernetes.io/projected/52069898-1ec9-4b87-a7c5-aae9fcf1301a-kube-api-access-t7s9h\") pod \"dashboard-metrics-scraper-8685c45546-tbr4l\" (UID: \"52069898-1ec9-4b87-a7c5-aae9fcf1301a\") "
	Aug 13 00:42:10 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:10.618748    5686 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 00:42:10 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:10.618813    5686 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 00:42:10 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:10.619005    5686 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-86g22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-d6wcs_kube-system(ca3671c5-bdeb-4af0-8e6c-3e69eddf7645): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host
	Aug 13 00:42:10 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:10.619058    5686 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-d6wcs" podUID=ca3671c5-bdeb-4af0-8e6c-3e69eddf7645
	Aug 13 00:42:11 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:11.420397    5686 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-d6wcs" podUID=ca3671c5-bdeb-4af0-8e6c-3e69eddf7645
	Aug 13 00:42:18 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:18.436356    5686 scope.go:111] "RemoveContainer" containerID="51e431ed38a8c6f1b8ef8d5732d6acc26f37235db7f482c235e23ddc3ff31cb8"
	Aug 13 00:42:19 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:19.439219    5686 scope.go:111] "RemoveContainer" containerID="51e431ed38a8c6f1b8ef8d5732d6acc26f37235db7f482c235e23ddc3ff31cb8"
	Aug 13 00:42:19 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:19.439348    5686 scope.go:111] "RemoveContainer" containerID="45854d5646e93d9160501134d3f2d50d62a88771df74a22cfb6256b564ecde00"
	Aug 13 00:42:19 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:19.439734    5686 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-tbr4l_kubernetes-dashboard(52069898-1ec9-4b87-a7c5-aae9fcf1301a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-tbr4l" podUID=52069898-1ec9-4b87-a7c5-aae9fcf1301a
	Aug 13 00:42:19 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:19.635767    5686 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/f6d10369f2ec95f1e489d682755a97d3558b63977214599bb618fdb50aedbea1/docker/f6d10369f2ec95f1e489d682755a97d3558b63977214599bb618fdb50aedbea1\": RecentStats: unable to find data in memory cache]"
	Aug 13 00:42:20 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:20.442689    5686 scope.go:111] "RemoveContainer" containerID="45854d5646e93d9160501134d3f2d50d62a88771df74a22cfb6256b564ecde00"
	Aug 13 00:42:20 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:20.443066    5686 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-tbr4l_kubernetes-dashboard(52069898-1ec9-4b87-a7c5-aae9fcf1301a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-tbr4l" podUID=52069898-1ec9-4b87-a7c5-aae9fcf1301a
	Aug 13 00:42:21 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:21.444617    5686 scope.go:111] "RemoveContainer" containerID="45854d5646e93d9160501134d3f2d50d62a88771df74a22cfb6256b564ecde00"
	Aug 13 00:42:21 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:21.445016    5686 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-tbr4l_kubernetes-dashboard(52069898-1ec9-4b87-a7c5-aae9fcf1301a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-tbr4l" podUID=52069898-1ec9-4b87-a7c5-aae9fcf1301a
	Aug 13 00:42:26 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:26.333311    5686 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 00:42:26 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:26.333373    5686 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 00:42:26 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:26.333549    5686 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-86g22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-d6wcs_kube-system(ca3671c5-bdeb-4af0-8e6c-3e69eddf7645): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host
	Aug 13 00:42:26 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:26.333606    5686 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-d6wcs" podUID=ca3671c5-bdeb-4af0-8e6c-3e69eddf7645
	Aug 13 00:42:29 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:29.741608    5686 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/f6d10369f2ec95f1e489d682755a97d3558b63977214599bb618fdb50aedbea1/docker/f6d10369f2ec95f1e489d682755a97d3558b63977214599bb618fdb50aedbea1\": RecentStats: unable to find data in memory cache]"
	
	* 
	* ==> kubernetes-dashboard [040094321d48d6d2b00866b99d09b513dda22a6c18ee7ca6fd42018867148478] <==
	* 2021/08/13 00:42:11 Starting overwatch
	2021/08/13 00:42:11 Using namespace: kubernetes-dashboard
	2021/08/13 00:42:11 Using in-cluster config to connect to apiserver
	2021/08/13 00:42:11 Using secret token for csrf signing
	2021/08/13 00:42:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 00:42:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 00:42:11 Successful initial request to the apiserver, version: v1.21.3
	2021/08/13 00:42:11 Generating JWE encryption key
	2021/08/13 00:42:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 00:42:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 00:42:11 Initializing JWE encryption key from synchronized object
	2021/08/13 00:42:11 Creating in-cluster Sidecar client
	2021/08/13 00:42:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 00:42:11 Serving insecurely on HTTP port: 9090
	
	* 
	* ==> storage-provisioner [65b03c21c2ec0eb764237745134555f63f7405afd3ba24ceb8dfe552ccdb23af] <==
	* I0813 00:42:10.433359       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 00:42:10.493520       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 00:42:10.493671       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 00:42:10.501645       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 00:42:10.501836       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20210813003107-676638_e668cfe8-4b95-45c6-9d37-a325554b1ca5!
	I0813 00:42:10.501814       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f38e255c-929c-4864-8155-18f2a84213bd", APIVersion:"v1", ResourceVersion:"584", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20210813003107-676638_e668cfe8-4b95-45c6-9d37-a325554b1ca5 became leader
	I0813 00:42:10.603029       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20210813003107-676638_e668cfe8-4b95-45c6-9d37-a325554b1ca5!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210813003107-676638 -n embed-certs-20210813003107-676638
helpers_test.go:262: (dbg) Run:  kubectl --context embed-certs-20210813003107-676638 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-d6wcs
helpers_test.go:273: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context embed-certs-20210813003107-676638 describe pod metrics-server-7c784ccb57-d6wcs
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20210813003107-676638 describe pod metrics-server-7c784ccb57-d6wcs: exit status 1 (79.289062ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-d6wcs" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context embed-certs-20210813003107-676638 describe pod metrics-server-7c784ccb57-d6wcs: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect embed-certs-20210813003107-676638
helpers_test.go:236: (dbg) docker inspect embed-certs-20210813003107-676638:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f6d10369f2ec95f1e489d682755a97d3558b63977214599bb618fdb50aedbea1",
	        "Created": "2021-08-13T00:34:27.125416283Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 911309,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T00:36:27.05449191Z",
	            "FinishedAt": "2021-08-13T00:36:24.417108562Z"
	        },
	        "Image": "sha256:8768eddc4356afffe3e639d96dfedd92c4546269e9e4366ab52cf09f53c80b71",
	        "ResolvConfPath": "/var/lib/docker/containers/f6d10369f2ec95f1e489d682755a97d3558b63977214599bb618fdb50aedbea1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f6d10369f2ec95f1e489d682755a97d3558b63977214599bb618fdb50aedbea1/hostname",
	        "HostsPath": "/var/lib/docker/containers/f6d10369f2ec95f1e489d682755a97d3558b63977214599bb618fdb50aedbea1/hosts",
	        "LogPath": "/var/lib/docker/containers/f6d10369f2ec95f1e489d682755a97d3558b63977214599bb618fdb50aedbea1/f6d10369f2ec95f1e489d682755a97d3558b63977214599bb618fdb50aedbea1-json.log",
	        "Name": "/embed-certs-20210813003107-676638",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-20210813003107-676638:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20210813003107-676638",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/45f753af1b8d453e1829ac66a826b1c26542343a3cee2ec3f5d9a77de7aa47f7-init/diff:/var/lib/docker/overlay2/dbcccdfd1d8030c8fd84392abd0651a1c83d85eef1664675f19095ba94d0669c/diff:/var/lib/docker/overlay2/48560ccfa5a167568c6c277306b75040147fa803b45938da98f999b9b34770ec/diff:/var/lib/docker/overlay2/82edbb53b45859b009a31b65fc937517517994e9f7f2b61ab6a2cd9b5d793ea6/diff:/var/lib/docker/overlay2/67407816da0f4fc9226789b4471160bc847b978aa567cac46bd77c492c2e0bd8/diff:/var/lib/docker/overlay2/56fdeb530def71ef2955d22a7a9769b93f1dfc06d3e44e40ff20fce371d47e93/diff:/var/lib/docker/overlay2/eb0df517e10831d2d369ffbcc40b44f2ae8a39b1845697429224cb9ee96aef88/diff:/var/lib/docker/overlay2/0f2b796a50d0eef34622b78f6ac5a1b4914163a3b69965848a40245456d0a358/diff:/var/lib/docker/overlay2/fbdca95cd15a30d761b8949a28398e3694f3cd5af4e11f01066b8aa89ab0e223/diff:/var/lib/docker/overlay2/94899cbf3c806327e740cdc8b542a92bcf6e487ba93ab006749e9b13198b697a/diff:/var/lib/docker/overlay2/26a7c8
74215c711e77443c1fe264e626d5672e0127f6210b657baea90dc79adb/diff:/var/lib/docker/overlay2/16bd4fd277923e4600e9bd3819ae284053a099ab01e82d41f29752792664be0e/diff:/var/lib/docker/overlay2/7309f9c878e5d24824d68bef540877dc63f2d4c0745de5d0bf7f09e2a65c4600/diff:/var/lib/docker/overlay2/69de2b4390e19f2dda71ecf7c7fef7a9c01fabcf86a7c439a2919ae1284c8de6/diff:/var/lib/docker/overlay2/0ff6ec4f8c21672b1a77bd0033b8786942a677b797ffa1c0fbbb8e03a13d84ed/diff:/var/lib/docker/overlay2/d672d17598d05d9daa3eddac9f958d6913ebfccf431eb143f1f3903b89d150a9/diff:/var/lib/docker/overlay2/0f5d711484163b1b3f60dd6126d6daa0154c241a003764ef80e81d73d68b3ed6/diff:/var/lib/docker/overlay2/d3e7cb92a45651117204153d8d9bc8490164e7c8f439d0c6d157aebf680816ae/diff:/var/lib/docker/overlay2/4b81367fe927507da6730098aedd39a4bd58482dacc101a1dd66f191161dce2d/diff:/var/lib/docker/overlay2/5e9324cbc949319d8647c63cf76f1776a9474d1b961f604c7d87daeb7ebb111d/diff:/var/lib/docker/overlay2/010e1940f131233ee479e023b64f3d26d5b8444f44686cc3f0f1508d966a3c37/diff:/var/lib/d
ocker/overlay2/842ba2e088d8e8cdfa07950eb5be4178d7c22d5932419eb6881e2551df6383d1/diff:/var/lib/docker/overlay2/5a3a00a19445c1d8b4de2bac2fee0c796647356d04b424b1a92c252905d279b0/diff:/var/lib/docker/overlay2/fe2f56e2617a01ef886be702089b24e7058e63d8e824252c67d4c1a0879ad160/diff:/var/lib/docker/overlay2/38b35bcc55b3c7019af7c73f5eed6e0fc162e93a9f9dc7005f87a42358887784/diff:/var/lib/docker/overlay2/d9c894d408f003f4a81d231415f823e9936440a1ee3195405f2fa88b29cd4174/diff:/var/lib/docker/overlay2/1f809a5b11bbef9de3b7711ec341e3852caa4fd2c21e59015b082ae96980b66a/diff:/var/lib/docker/overlay2/99b8edcd10c58a9d6dc18c04bc3d78ee5e078fd13677063e50d0f8b7cd484f8e/diff:/var/lib/docker/overlay2/b7e659e3e24c55bbbb4560a609f06959cff67515ccfed5251eb149eb25e46066/diff:/var/lib/docker/overlay2/cd8af3183f19e2c4a376399788541c30ba2531a85eeecf9fe11864d329a507d9/diff:/var/lib/docker/overlay2/84813126d4751fc1c3f21d3f70102678caac8153899dc8a5e0af833842e299a8/diff:/var/lib/docker/overlay2/2a328079a8a98d312436a8d89f7b47dde7400fe0357b71b76ed6bc760f8
0f741/diff:/var/lib/docker/overlay2/68fb29110f487206a1dee378747a2f3ef1c89149c9968662726587ea918839d7/diff:/var/lib/docker/overlay2/f9baf28d86b9d2aa6fbb47eab690cb3a8a89d77efe26a5f0c73e8f86bce7214f/diff:/var/lib/docker/overlay2/dad436e2a201d537bbbd0e375ec55a078b79dad52ee0a39399e1d1044bef8136/diff:/var/lib/docker/overlay2/4c5f3abd2b3256b532e1522df89aaca80508afb850fe2081fd29e234ecc52a3c/diff:/var/lib/docker/overlay2/abd7c1d6e94e128091e4cd7c4e2b418a6e7f40430fa8e22724424ee318edfaa2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/45f753af1b8d453e1829ac66a826b1c26542343a3cee2ec3f5d9a77de7aa47f7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/45f753af1b8d453e1829ac66a826b1c26542343a3cee2ec3f5d9a77de7aa47f7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/45f753af1b8d453e1829ac66a826b1c26542343a3cee2ec3f5d9a77de7aa47f7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20210813003107-676638",
	                "Source": "/var/lib/docker/volumes/embed-certs-20210813003107-676638/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20210813003107-676638",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20210813003107-676638",
	                "name.minikube.sigs.k8s.io": "embed-certs-20210813003107-676638",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3c13ebe772652424ec897ff3791fae9f68f6f35c3382eb04ec476c0e2bed18be",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3c13ebe77265",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20210813003107-676638": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f6d10369f2ec"
	                    ],
	                    "NetworkID": "c83c1e95b109d6be271c44cd1a18ca39647f3071af7bf6561f51b9b8be426451",
	                    "EndpointID": "9583045cc74d797afcc2fa9e57f14b58e176f7c2fd07691535f5370dda986ea4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
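The `docker inspect` dump above records the container's `State`, the requested `HostConfig.PortBindings` (empty `HostPort` = pick any free port), and the actually-published `NetworkSettings.Ports`. A minimal sketch of pulling the published host ports out of inspect-style JSON — the fragment below is abridged from the output above, not a live `docker inspect` call:

```python
import json

# Abridged fragment shaped like the `docker inspect` output above
# (real output is a JSON array of full container objects).
inspect_output = json.loads("""
[
  {
    "Name": "/embed-certs-20210813003107-676638",
    "State": {"Status": "running", "Running": true},
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "33442"}],
        "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33439"}]
      }
    }
  }
]
""")

def host_ports(container: dict) -> dict:
    """Map container port (e.g. '22/tcp') -> published host port."""
    ports = container["NetworkSettings"]["Ports"] or {}
    return {
        port: bindings[0]["HostPort"]
        for port, bindings in ports.items()
        if bindings  # unpublished ports map to None / empty list
    }

container = inspect_output[0]
print(container["State"]["Status"])  # running
print(host_ports(container))         # {'22/tcp': '33442', '8443/tcp': '33439'}
```

The same data can be extracted directly with `docker inspect --format` Go templates; parsing the JSON is just the driver-agnostic route.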
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813003107-676638 -n embed-certs-20210813003107-676638
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20210813003107-676638 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-20210813003107-676638 logs -n 25: (1.047673558s)
helpers_test.go:253: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                      |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable metrics-server -p                                   | embed-certs-20210813003107-676638                | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:36:03 UTC | Fri, 13 Aug 2021 00:36:03 UTC |
	|         | embed-certs-20210813003107-676638                          |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20210813003107-676638                | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:36:03 UTC | Fri, 13 Aug 2021 00:36:25 UTC |
	|         | embed-certs-20210813003107-676638                          |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20210813003107-676638                | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:36:25 UTC | Fri, 13 Aug 2021 00:36:25 UTC |
	|         | embed-certs-20210813003107-676638                          |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210813003110-676638 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:32:54 UTC | Fri, 13 Aug 2021 00:38:42 UTC |
	|         | default-k8s-different-port-20210813003110-676638           |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210813003110-676638 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:38:53 UTC | Fri, 13 Aug 2021 00:38:53 UTC |
	|         | default-k8s-different-port-20210813003110-676638           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| pause   | -p                                                         | default-k8s-different-port-20210813003110-676638 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:38:53 UTC | Fri, 13 Aug 2021 00:38:54 UTC |
	|         | default-k8s-different-port-20210813003110-676638           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| unpause | -p                                                         | default-k8s-different-port-20210813003110-676638 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:38:54 UTC | Fri, 13 Aug 2021 00:38:55 UTC |
	|         | default-k8s-different-port-20210813003110-676638           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813003110-676638 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:38:56 UTC | Fri, 13 Aug 2021 00:39:00 UTC |
	|         | default-k8s-different-port-20210813003110-676638           |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210813003110-676638 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:39:00 UTC | Fri, 13 Aug 2021 00:39:01 UTC |
	|         | default-k8s-different-port-20210813003110-676638           |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20210813003901-676638 --memory=2200          | newest-cni-20210813003901-676638                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:39:01 UTC | Fri, 13 Aug 2021 00:39:51 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                  |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210813003901-676638                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:39:51 UTC | Fri, 13 Aug 2021 00:39:51 UTC |
	|         | newest-cni-20210813003901-676638                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                  |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                  |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210813003901-676638                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:39:51 UTC | Fri, 13 Aug 2021 00:40:12 UTC |
	|         | newest-cni-20210813003901-676638                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                  |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210813003901-676638                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:40:12 UTC | Fri, 13 Aug 2021 00:40:12 UTC |
	|         | newest-cni-20210813003901-676638                           |                                                  |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                  |         |         |                               |                               |
	| start   | -p newest-cni-20210813003901-676638 --memory=2200          | newest-cni-20210813003901-676638                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:40:12 UTC | Fri, 13 Aug 2021 00:40:38 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                  |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                  |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                  |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=crio                  |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210813003901-676638                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:40:38 UTC | Fri, 13 Aug 2021 00:40:39 UTC |
	|         | newest-cni-20210813003901-676638                           |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20210813003901-676638                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:40:39 UTC | Fri, 13 Aug 2021 00:40:39 UTC |
	|         | newest-cni-20210813003901-676638                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20210813003901-676638                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:40:40 UTC | Fri, 13 Aug 2021 00:40:41 UTC |
	|         | newest-cni-20210813003901-676638                           |                                                  |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20210813003901-676638                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:40:41 UTC | Fri, 13 Aug 2021 00:40:45 UTC |
	|         | newest-cni-20210813003901-676638                           |                                                  |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20210813003901-676638                 | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:40:45 UTC | Fri, 13 Aug 2021 00:40:46 UTC |
	|         | newest-cni-20210813003901-676638                           |                                                  |         |         |                               |                               |
	| start   | -p auto-20210813002925-676638                              | auto-20210813002925-676638                       | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:40:46 UTC | Fri, 13 Aug 2021 00:41:55 UTC |
	|         | --memory=2048                                              |                                                  |         |         |                               |                               |
	|         | --alsologtostderr                                          |                                                  |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                              |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                  |         |         |                               |                               |
	| ssh     | -p auto-20210813002925-676638                              | auto-20210813002925-676638                       | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:41:55 UTC | Fri, 13 Aug 2021 00:41:56 UTC |
	|         | pgrep -a kubelet                                           |                                                  |         |         |                               |                               |
	| delete  | -p auto-20210813002925-676638                              | auto-20210813002925-676638                       | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:42:06 UTC | Fri, 13 Aug 2021 00:42:09 UTC |
	| start   | -p                                                         | embed-certs-20210813003107-676638                | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:36:25 UTC | Fri, 13 Aug 2021 00:42:15 UTC |
	|         | embed-certs-20210813003107-676638                          |                                                  |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                  |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                  |         |         |                               |                               |
	|         | --driver=docker                                            |                                                  |         |         |                               |                               |
	|         | --container-runtime=crio                                   |                                                  |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                  |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210813003107-676638                | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:42:26 UTC | Fri, 13 Aug 2021 00:42:26 UTC |
	|         | embed-certs-20210813003107-676638                          |                                                  |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                  |         |         |                               |                               |
	| -p      | embed-certs-20210813003107-676638                          | embed-certs-20210813003107-676638                | jenkins | v1.22.0 | Fri, 13 Aug 2021 00:42:29 UTC | Fri, 13 Aug 2021 00:42:30 UTC |
	|         | logs -n 25                                                 |                                                  |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
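Each Audit row above carries a Start Time and End Time in RFC-1123-style form (`Fri, 13 Aug 2021 00:36:03 UTC`), so per-command durations can be recovered from the table. A sketch, assuming that timestamp format:

```python
from datetime import datetime

# Timestamp format used by the Start Time / End Time columns above.
FMT = "%a, %d %b %Y %H:%M:%S %Z"

def duration_seconds(start: str, end: str) -> float:
    """Elapsed seconds between two Audit-table timestamps."""
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds()

# The `stop -p embed-certs-20210813003107-676638` row ran 00:36:03 -> 00:36:25:
print(duration_seconds("Fri, 13 Aug 2021 00:36:03 UTC",
                       "Fri, 13 Aug 2021 00:36:25 UTC"))  # 22.0
```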
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 00:42:09
	Running on machine: debian-jenkins-agent-12
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
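The header above documents the klog line format (`[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`). A sketch of splitting such lines into fields; the group names are my own labels, not anything klog defines:

```python
import re

# Parse klog-format lines: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG_RE = re.compile(
    r"^(?P<level>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+"
    r"(?P<threadid>\d+)\s+"
    r"(?P<file>[^:]+):(?P<line>\d+)\]\s(?P<msg>.*)$"
)

sample = "I0813 00:42:09.814445  943278 out.go:298] Setting OutFile to fd 1 ..."
m = KLOG_RE.match(sample)
print(m.group("level"), m.group("file"), m.group("line"), m.group("msg"))
# I out.go 298 Setting OutFile to fd 1 ...
```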
	I0813 00:42:09.814445  943278 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:42:09.814557  943278 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:42:09.814566  943278 out.go:311] Setting ErrFile to fd 2...
	I0813 00:42:09.814571  943278 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:42:09.814702  943278 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:42:09.815004  943278 out.go:305] Setting JSON to false
	I0813 00:42:09.855737  943278 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":15891,"bootTime":1628799438,"procs":336,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 00:42:09.855857  943278 start.go:121] virtualization: kvm guest
	I0813 00:42:09.858371  943278 out.go:177] * [custom-weave-20210813002927-676638] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 00:42:09.860165  943278 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:42:09.858534  943278 notify.go:169] Checking for updates...
	I0813 00:42:09.861966  943278 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 00:42:09.863758  943278 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:42:09.865538  943278 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 00:42:09.866368  943278 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 00:42:09.939678  943278 docker.go:132] docker version: linux-19.03.15
	I0813 00:42:09.939803  943278 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 00:42:10.049698  943278 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:77 SystemTime:2021-08-13 00:42:09.9797783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 00:42:10.049787  943278 docker.go:244] overlay module found
	I0813 00:42:10.052658  943278 out.go:177] * Using the docker driver based on user configuration
	I0813 00:42:10.052691  943278 start.go:278] selected driver: docker
	I0813 00:42:10.052698  943278 start.go:751] validating driver "docker" against <nil>
	I0813 00:42:10.052719  943278 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 00:42:10.052774  943278 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 00:42:10.052792  943278 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 00:42:10.054605  943278 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 00:42:10.055560  943278 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 00:42:10.165323  943278 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:77 SystemTime:2021-08-13 00:42:10.10973775 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 00:42:10.165485  943278 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 00:42:10.165637  943278 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 00:42:10.165659  943278 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0813 00:42:10.165677  943278 start_flags.go:272] Found "testdata/weavenet.yaml" CNI - setting NetworkPlugin=cni
	I0813 00:42:10.165686  943278 start_flags.go:277] config:
	{Name:custom-weave-20210813002927-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210813002927-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:42:10.168179  943278 out.go:177] * Starting control plane node custom-weave-20210813002927-676638 in cluster custom-weave-20210813002927-676638
	I0813 00:42:10.168238  943278 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 00:42:07.086898  911032 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0813 00:42:07.086972  911032 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0813 00:42:07.086985  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0813 00:42:07.087035  911032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813003107-676638
	I0813 00:42:07.111629  911032 addons.go:135] Setting addon default-storageclass=true in "embed-certs-20210813003107-676638"
	W0813 00:42:07.111676  911032 addons.go:147] addon default-storageclass should already be in state true
	I0813 00:42:07.111713  911032 host.go:66] Checking if "embed-certs-20210813003107-676638" exists ...
	I0813 00:42:07.112265  911032 cli_runner.go:115] Run: docker container inspect embed-certs-20210813003107-676638 --format={{.State.Status}}
	I0813 00:42:07.141052  911032 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20210813003107-676638" to be "Ready" ...
	I0813 00:42:07.141541  911032 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 00:42:07.156131  911032 node_ready.go:49] node "embed-certs-20210813003107-676638" has status "Ready":"True"
	I0813 00:42:07.156158  911032 node_ready.go:38] duration metric: took 15.06995ms waiting for node "embed-certs-20210813003107-676638" to be "Ready" ...
	I0813 00:42:07.156172  911032 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:42:07.161432  911032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/embed-certs-20210813003107-676638/id_rsa Username:docker}
	I0813 00:42:07.163140  911032 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-5tbf8" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:07.170142  911032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/embed-certs-20210813003107-676638/id_rsa Username:docker}
	I0813 00:42:07.170737  911032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/embed-certs-20210813003107-676638/id_rsa Username:docker}
	I0813 00:42:07.179801  911032 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 00:42:07.179827  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 00:42:07.179881  911032 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20210813003107-676638
	I0813 00:42:07.242728  911032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33442 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/embed-certs-20210813003107-676638/id_rsa Username:docker}
	I0813 00:42:07.413470  911032 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 00:42:07.414525  911032 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0813 00:42:07.414549  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0813 00:42:07.494634  911032 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 00:42:07.494662  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0813 00:42:07.513155  911032 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0813 00:42:07.513196  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0813 00:42:07.590359  911032 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 00:42:07.590406  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 00:42:07.612495  911032 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 00:42:07.702652  911032 pod_ready.go:97] error getting pod "coredns-558bd4d5db-5tbf8" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-5tbf8" not found
	I0813 00:42:07.702685  911032 pod_ready.go:81] duration metric: took 539.513964ms waiting for pod "coredns-558bd4d5db-5tbf8" in "kube-system" namespace to be "Ready" ...
	E0813 00:42:07.702700  911032 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-5tbf8" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-5tbf8" not found
	I0813 00:42:07.702709  911032 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-9bdqj" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:07.722434  911032 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0813 00:42:07.722467  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0813 00:42:07.795457  911032 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 00:42:07.795486  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 00:42:07.815964  911032 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0813 00:42:07.815995  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0813 00:42:07.828521  911032 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 00:42:07.920836  911032 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0813 00:42:07.920868  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0813 00:42:08.007454  911032 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0813 00:42:08.007486  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0813 00:42:08.016468  911032 start.go:736] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
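The host-record injection that completes in the line above can be reproduced in isolation. The sketch below is a hypothetical standalone version of the `sed` rewrite shown in the log at 00:42:07.141541: it splices a CoreDNS `hosts` block (mapping `host.minikube.internal` to the host gateway, 192.168.94.1 in this run) in front of the `forward` directive. The sample Corefile is illustrative; in the cluster the input comes from the `coredns` configmap and the result is fed back through `kubectl replace`.

```shell
# Sample Corefile standing in for the live coredns configmap (illustrative).
cat > /tmp/Corefile <<'EOF'
.:53 {
    errors
    forward . /etc/resolv.conf {
        max_concurrent 1000
    }
    cache 30
}
EOF
# Insert a hosts{} block before the forward directive, as the log's
# sed pipeline does (GNU sed: \n in i\ text starts a new inserted line).
sed '/^    forward . \/etc\/resolv.conf.*/i \    hosts {\n       192.168.94.1 host.minikube.internal\n       fallthrough\n    }' /tmp/Corefile
```

CoreDNS's `fallthrough` keeps names not matched by the `hosts` block flowing to the remaining plugins, so only `host.minikube.internal` is answered from the injected record.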
	I0813 00:42:08.107832  911032 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0813 00:42:08.107865  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0813 00:42:08.214566  911032 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0813 00:42:08.214602  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0813 00:42:08.310259  911032 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 00:42:08.310293  911032 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0813 00:42:08.506021  911032 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0813 00:42:08.819783  911032 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.406216686s)
	I0813 00:42:08.819858  911032 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.207323507s)
	I0813 00:42:09.403819  911032 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.575244401s)
	I0813 00:42:09.403863  911032 addons.go:313] Verifying addon metrics-server=true in "embed-certs-20210813003107-676638"
	I0813 00:42:09.715781  911032 pod_ready.go:102] pod "coredns-558bd4d5db-9bdqj" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:10.169914  943278 out.go:177] * Pulling base image ...
	I0813 00:42:10.169991  943278 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:42:10.170041  943278 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 00:42:10.170050  943278 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0813 00:42:10.170056  943278 cache.go:56] Caching tarball of preloaded images
	I0813 00:42:10.170408  943278 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 00:42:10.170444  943278 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 00:42:10.170566  943278 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/config.json ...
	I0813 00:42:10.170587  943278 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/config.json: {Name:mkc852494605645a232bde25ec20290f1d76e998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:42:10.283460  943278 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0813 00:42:10.283499  943278 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0813 00:42:10.283514  943278 cache.go:205] Successfully downloaded all kic artifacts
	I0813 00:42:10.283557  943278 start.go:313] acquiring machines lock for custom-weave-20210813002927-676638: {Name:mk111d3a4b930a37a53e0c69523046f37729edd2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:42:10.283720  943278 start.go:317] acquired machines lock for "custom-weave-20210813002927-676638" in 142.145µs
	I0813 00:42:10.283753  943278 start.go:89] Provisioning new machine with config: &{Name:custom-weave-20210813002927-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210813002927-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 00:42:10.283828  943278 start.go:126] createHost starting for "" (driver="docker")
	I0813 00:42:10.298925  911032 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.792845826s)
	I0813 00:42:08.545000  894487 pod_ready.go:102] pod "metrics-server-8546d8b77b-5jdp6" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:10.545533  894487 pod_ready.go:102] pod "metrics-server-8546d8b77b-5jdp6" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:10.286743  943278 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0813 00:42:10.287053  943278 start.go:160] libmachine.API.Create for "custom-weave-20210813002927-676638" (driver="docker")
	I0813 00:42:10.287090  943278 client.go:168] LocalClient.Create starting
	I0813 00:42:10.287181  943278 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0813 00:42:10.287212  943278 main.go:130] libmachine: Decoding PEM data...
	I0813 00:42:10.287232  943278 main.go:130] libmachine: Parsing certificate...
	I0813 00:42:10.287359  943278 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0813 00:42:10.287377  943278 main.go:130] libmachine: Decoding PEM data...
	I0813 00:42:10.287387  943278 main.go:130] libmachine: Parsing certificate...
	I0813 00:42:10.287710  943278 cli_runner.go:115] Run: docker network inspect custom-weave-20210813002927-676638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 00:42:10.338481  943278 cli_runner.go:162] docker network inspect custom-weave-20210813002927-676638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 00:42:10.338559  943278 network_create.go:255] running [docker network inspect custom-weave-20210813002927-676638] to gather additional debugging logs...
	I0813 00:42:10.338580  943278 cli_runner.go:115] Run: docker network inspect custom-weave-20210813002927-676638
	W0813 00:42:10.383997  943278 cli_runner.go:162] docker network inspect custom-weave-20210813002927-676638 returned with exit code 1
	I0813 00:42:10.384035  943278 network_create.go:258] error running [docker network inspect custom-weave-20210813002927-676638]: docker network inspect custom-weave-20210813002927-676638: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20210813002927-676638
	I0813 00:42:10.384065  943278 network_create.go:260] output of [docker network inspect custom-weave-20210813002927-676638]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20210813002927-676638
	
	** /stderr **
	I0813 00:42:10.384121  943278 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 00:42:10.442035  943278 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006660a0] misses:0}
	I0813 00:42:10.442109  943278 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 00:42:10.442130  943278 network_create.go:106] attempt to create docker network custom-weave-20210813002927-676638 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0813 00:42:10.442186  943278 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20210813002927-676638
	I0813 00:42:10.524163  943278 network_create.go:90] docker network custom-weave-20210813002927-676638 192.168.49.0/24 created
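The `network_create` step logged above boils down to the single `docker network create` invocation shown at 00:42:10.442186. The sketch below only assembles and prints that command rather than running it, so no Docker daemon is needed; the profile name, subnet, gateway, and MTU are taken verbatim from the log lines above.

```shell
# Assemble the bridge-network command minikube runs (side-effect free sketch).
NET=custom-weave-20210813002927-676638
CMD="docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true $NET"
echo "$CMD"
```

The static node IP 192.168.49.2 the log computes next is simply the first client address (ClientMin) of this /24, as the subnet reservation line above shows.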
	I0813 00:42:10.524223  943278 kic.go:106] calculated static IP "192.168.49.2" for the "custom-weave-20210813002927-676638" container
	I0813 00:42:10.524301  943278 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 00:42:10.572562  943278 cli_runner.go:115] Run: docker volume create custom-weave-20210813002927-676638 --label name.minikube.sigs.k8s.io=custom-weave-20210813002927-676638 --label created_by.minikube.sigs.k8s.io=true
	I0813 00:42:10.626215  943278 oci.go:102] Successfully created a docker volume custom-weave-20210813002927-676638
	I0813 00:42:10.626327  943278 cli_runner.go:115] Run: docker run --rm --name custom-weave-20210813002927-676638-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210813002927-676638 --entrypoint /usr/bin/test -v custom-weave-20210813002927-676638:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0813 00:42:11.472841  943278 oci.go:106] Successfully prepared a docker volume custom-weave-20210813002927-676638
	W0813 00:42:11.472921  943278 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 00:42:11.472930  943278 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 00:42:11.472989  943278 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 00:42:11.473002  943278 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:42:11.473041  943278 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 00:42:11.473113  943278 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20210813002927-676638:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 00:42:11.575489  943278 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20210813002927-676638 --name custom-weave-20210813002927-676638 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20210813002927-676638 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20210813002927-676638 --network custom-weave-20210813002927-676638 --ip 192.168.49.2 --volume custom-weave-20210813002927-676638:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0813 00:42:12.109361  943278 cli_runner.go:115] Run: docker container inspect custom-weave-20210813002927-676638 --format={{.State.Running}}
	I0813 00:42:12.164816  943278 cli_runner.go:115] Run: docker container inspect custom-weave-20210813002927-676638 --format={{.State.Status}}
	I0813 00:42:12.224425  943278 cli_runner.go:115] Run: docker exec custom-weave-20210813002927-676638 stat /var/lib/dpkg/alternatives/iptables
	I0813 00:42:12.366981  943278 oci.go:278] the created container "custom-weave-20210813002927-676638" has a running status.
	I0813 00:42:12.367024  943278 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/custom-weave-20210813002927-676638/id_rsa...
	I0813 00:42:12.449300  943278 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/custom-weave-20210813002927-676638/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 00:42:12.828839  943278 cli_runner.go:115] Run: docker container inspect custom-weave-20210813002927-676638 --format={{.State.Status}}
	I0813 00:42:12.880359  943278 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 00:42:12.880383  943278 kic_runner.go:115] Args: [docker exec --privileged custom-weave-20210813002927-676638 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 00:42:10.301617  911032 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0813 00:42:10.301663  911032 addons.go:344] enableAddons completed in 3.298217614s
	I0813 00:42:11.716545  911032 pod_ready.go:102] pod "coredns-558bd4d5db-9bdqj" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:14.215433  911032 pod_ready.go:102] pod "coredns-558bd4d5db-9bdqj" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:14.717130  911032 pod_ready.go:92] pod "coredns-558bd4d5db-9bdqj" in "kube-system" namespace has status "Ready":"True"
	I0813 00:42:14.717160  911032 pod_ready.go:81] duration metric: took 7.014442275s waiting for pod "coredns-558bd4d5db-9bdqj" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:14.717175  911032 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210813003107-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:14.723459  911032 pod_ready.go:92] pod "etcd-embed-certs-20210813003107-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:42:14.723484  911032 pod_ready.go:81] duration metric: took 6.301103ms waiting for pod "etcd-embed-certs-20210813003107-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:14.723503  911032 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210813003107-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:14.734593  911032 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210813003107-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:42:14.734618  911032 pod_ready.go:81] duration metric: took 11.104664ms waiting for pod "kube-apiserver-embed-certs-20210813003107-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:14.734634  911032 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210813003107-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:14.740018  911032 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210813003107-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:42:14.740042  911032 pod_ready.go:81] duration metric: took 5.399184ms waiting for pod "kube-controller-manager-embed-certs-20210813003107-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:14.740056  911032 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bhdzr" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:14.792610  911032 pod_ready.go:92] pod "kube-proxy-bhdzr" in "kube-system" namespace has status "Ready":"True"
	I0813 00:42:14.792639  911032 pod_ready.go:81] duration metric: took 52.573174ms waiting for pod "kube-proxy-bhdzr" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:14.792661  911032 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210813003107-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:15.115317  911032 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210813003107-676638" in "kube-system" namespace has status "Ready":"True"
	I0813 00:42:15.115417  911032 pod_ready.go:81] duration metric: took 322.742497ms waiting for pod "kube-scheduler-embed-certs-20210813003107-676638" in "kube-system" namespace to be "Ready" ...
	I0813 00:42:15.115451  911032 pod_ready.go:38] duration metric: took 7.959259332s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:42:15.115504  911032 api_server.go:50] waiting for apiserver process to appear ...
	I0813 00:42:15.115558  911032 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 00:42:15.204047  911032 api_server.go:70] duration metric: took 8.200893471s to wait for apiserver process to appear ...
	I0813 00:42:15.204082  911032 api_server.go:86] waiting for apiserver healthz status ...
	I0813 00:42:15.204095  911032 api_server.go:239] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0813 00:42:15.211809  911032 api_server.go:265] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0813 00:42:15.212821  911032 api_server.go:139] control plane version: v1.21.3
	I0813 00:42:15.212849  911032 api_server.go:129] duration metric: took 8.76047ms to wait for apiserver health ...
	I0813 00:42:15.212861  911032 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 00:42:15.318466  911032 system_pods.go:59] 9 kube-system pods found
	I0813 00:42:15.318508  911032 system_pods.go:61] "coredns-558bd4d5db-9bdqj" [f2545cec-a503-40cb-9cb3-741144b6320a] Running
	I0813 00:42:15.318517  911032 system_pods.go:61] "etcd-embed-certs-20210813003107-676638" [ff764b4e-ac7c-47fa-a769-e739def8d075] Running
	I0813 00:42:15.318524  911032 system_pods.go:61] "kindnet-m9wdh" [ac041149-ee5c-4e8f-a58b-3a12b8d54cb5] Running
	I0813 00:42:15.318531  911032 system_pods.go:61] "kube-apiserver-embed-certs-20210813003107-676638" [576de72b-b823-41a9-bfdb-5b61496caf1b] Running
	I0813 00:42:15.318538  911032 system_pods.go:61] "kube-controller-manager-embed-certs-20210813003107-676638" [08a62edf-ce98-4d39-b176-29fb55369ef7] Running
	I0813 00:42:15.318543  911032 system_pods.go:61] "kube-proxy-bhdzr" [4c60eef0-dca5-406f-a24f-a62d85933b3e] Running
	I0813 00:42:15.318552  911032 system_pods.go:61] "kube-scheduler-embed-certs-20210813003107-676638" [d682ae8b-0cfc-4bea-a1c9-8e007f5468bc] Running
	I0813 00:42:15.318569  911032 system_pods.go:61] "metrics-server-7c784ccb57-d6wcs" [ca3671c5-bdeb-4af0-8e6c-3e69eddf7645] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 00:42:15.318583  911032 system_pods.go:61] "storage-provisioner" [850780eb-d4e0-457d-95fd-d3a046e8cac5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 00:42:15.318601  911032 system_pods.go:74] duration metric: took 105.728892ms to wait for pod list to return data ...
	I0813 00:42:15.318620  911032 default_sa.go:34] waiting for default service account to be created ...
	I0813 00:42:15.514317  911032 default_sa.go:45] found service account: "default"
	I0813 00:42:15.514356  911032 default_sa.go:55] duration metric: took 195.728599ms for default service account to be created ...
	I0813 00:42:15.514368  911032 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 00:42:15.718808  911032 system_pods.go:86] 9 kube-system pods found
	I0813 00:42:15.718847  911032 system_pods.go:89] "coredns-558bd4d5db-9bdqj" [f2545cec-a503-40cb-9cb3-741144b6320a] Running
	I0813 00:42:15.718856  911032 system_pods.go:89] "etcd-embed-certs-20210813003107-676638" [ff764b4e-ac7c-47fa-a769-e739def8d075] Running
	I0813 00:42:15.718861  911032 system_pods.go:89] "kindnet-m9wdh" [ac041149-ee5c-4e8f-a58b-3a12b8d54cb5] Running
	I0813 00:42:15.718867  911032 system_pods.go:89] "kube-apiserver-embed-certs-20210813003107-676638" [576de72b-b823-41a9-bfdb-5b61496caf1b] Running
	I0813 00:42:15.718875  911032 system_pods.go:89] "kube-controller-manager-embed-certs-20210813003107-676638" [08a62edf-ce98-4d39-b176-29fb55369ef7] Running
	I0813 00:42:15.718882  911032 system_pods.go:89] "kube-proxy-bhdzr" [4c60eef0-dca5-406f-a24f-a62d85933b3e] Running
	I0813 00:42:15.718889  911032 system_pods.go:89] "kube-scheduler-embed-certs-20210813003107-676638" [d682ae8b-0cfc-4bea-a1c9-8e007f5468bc] Running
	I0813 00:42:15.718903  911032 system_pods.go:89] "metrics-server-7c784ccb57-d6wcs" [ca3671c5-bdeb-4af0-8e6c-3e69eddf7645] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 00:42:15.718914  911032 system_pods.go:89] "storage-provisioner" [850780eb-d4e0-457d-95fd-d3a046e8cac5] Running
	I0813 00:42:15.718925  911032 system_pods.go:126] duration metric: took 204.550995ms to wait for k8s-apps to be running ...
	I0813 00:42:15.718937  911032 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 00:42:15.718990  911032 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:42:15.731278  911032 system_svc.go:56] duration metric: took 12.330165ms WaitForService to wait for kubelet.
	I0813 00:42:15.731316  911032 kubeadm.go:547] duration metric: took 8.728164828s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 00:42:15.731352  911032 node_conditions.go:102] verifying NodePressure condition ...
	I0813 00:42:15.915074  911032 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0813 00:42:15.915106  911032 node_conditions.go:123] node cpu capacity is 8
	I0813 00:42:15.915121  911032 node_conditions.go:105] duration metric: took 183.762685ms to run NodePressure ...
	I0813 00:42:15.915134  911032 start.go:231] waiting for startup goroutines ...
	I0813 00:42:15.974318  911032 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0813 00:42:15.978693  911032 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210813003107-676638" cluster and "default" namespace by default
	I0813 00:42:13.044747  894487 pod_ready.go:102] pod "metrics-server-8546d8b77b-5jdp6" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:15.545520  894487 pod_ready.go:102] pod "metrics-server-8546d8b77b-5jdp6" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:15.723544  943278 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20210813002927-676638:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.2503568s)
	I0813 00:42:15.723596  943278 kic.go:188] duration metric: took 4.250552 seconds to extract preloaded images to volume
	I0813 00:42:15.723760  943278 cli_runner.go:115] Run: docker container inspect custom-weave-20210813002927-676638 --format={{.State.Status}}
	I0813 00:42:15.771745  943278 machine.go:88] provisioning docker machine ...
	I0813 00:42:15.771791  943278 ubuntu.go:169] provisioning hostname "custom-weave-20210813002927-676638"
	I0813 00:42:15.771866  943278 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210813002927-676638
	I0813 00:42:15.820129  943278 main.go:130] libmachine: Using SSH client type: native
	I0813 00:42:15.820371  943278 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33462 <nil> <nil>}
	I0813 00:42:15.820394  943278 main.go:130] libmachine: About to run SSH command:
	sudo hostname custom-weave-20210813002927-676638 && echo "custom-weave-20210813002927-676638" | sudo tee /etc/hostname
	I0813 00:42:15.974613  943278 main.go:130] libmachine: SSH cmd err, output: <nil>: custom-weave-20210813002927-676638
	
	I0813 00:42:15.974692  943278 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210813002927-676638
	I0813 00:42:16.033977  943278 main.go:130] libmachine: Using SSH client type: native
	I0813 00:42:16.034211  943278 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33462 <nil> <nil>}
	I0813 00:42:16.034253  943278 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-weave-20210813002927-676638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20210813002927-676638/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-weave-20210813002927-676638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 00:42:16.154105  943278 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 00:42:16.154143  943278 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0813 00:42:16.154170  943278 ubuntu.go:177] setting up certificates
	I0813 00:42:16.154185  943278 provision.go:83] configureAuth start
	I0813 00:42:16.154242  943278 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20210813002927-676638
	I0813 00:42:16.206046  943278 provision.go:137] copyHostCerts
	I0813 00:42:16.206121  943278 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem, removing ...
	I0813 00:42:16.206137  943278 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 00:42:16.206196  943278 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0813 00:42:16.206285  943278 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem, removing ...
	I0813 00:42:16.206299  943278 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 00:42:16.206330  943278 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1675 bytes)
	I0813 00:42:16.206399  943278 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem, removing ...
	I0813 00:42:16.206410  943278 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 00:42:16.206440  943278 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1082 bytes)
	I0813 00:42:16.206494  943278 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.custom-weave-20210813002927-676638 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20210813002927-676638]
	I0813 00:42:16.332470  943278 provision.go:171] copyRemoteCerts
	I0813 00:42:16.332554  943278 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 00:42:16.332611  943278 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210813002927-676638
	I0813 00:42:16.378621  943278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33462 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/custom-weave-20210813002927-676638/id_rsa Username:docker}
	I0813 00:42:16.465118  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 00:42:16.484423  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0813 00:42:16.504977  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 00:42:16.525339  943278 provision.go:86] duration metric: configureAuth took 371.135623ms
	I0813 00:42:16.525375  943278 ubuntu.go:193] setting minikube options for container-runtime
	I0813 00:42:16.525655  943278 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210813002927-676638
	I0813 00:42:16.574228  943278 main.go:130] libmachine: Using SSH client type: native
	I0813 00:42:16.574402  943278 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33462 <nil> <nil>}
	I0813 00:42:16.574420  943278 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 00:42:17.003267  943278 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 00:42:17.003307  943278 machine.go:91] provisioned docker machine in 1.231536462s
	I0813 00:42:17.003318  943278 client.go:171] LocalClient.Create took 6.716216728s
	I0813 00:42:17.003331  943278 start.go:168] duration metric: libmachine.API.Create for "custom-weave-20210813002927-676638" took 6.71627879s
	I0813 00:42:17.003342  943278 start.go:267] post-start starting for "custom-weave-20210813002927-676638" (driver="docker")
	I0813 00:42:17.003350  943278 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 00:42:17.003422  943278 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 00:42:17.003496  943278 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210813002927-676638
	I0813 00:42:17.059533  943278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33462 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/custom-weave-20210813002927-676638/id_rsa Username:docker}
	I0813 00:42:17.145188  943278 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 00:42:17.148246  943278 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 00:42:17.148270  943278 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 00:42:17.148279  943278 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 00:42:17.148286  943278 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 00:42:17.148297  943278 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0813 00:42:17.148342  943278 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0813 00:42:17.148425  943278 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem -> 6766382.pem in /etc/ssl/certs
	I0813 00:42:17.148518  943278 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 00:42:17.156928  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem --> /etc/ssl/certs/6766382.pem (1708 bytes)
	I0813 00:42:17.175894  943278 start.go:270] post-start completed in 172.53531ms
	I0813 00:42:17.176277  943278 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20210813002927-676638
	I0813 00:42:17.224288  943278 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/config.json ...
	I0813 00:42:17.224577  943278 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 00:42:17.224649  943278 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210813002927-676638
	I0813 00:42:17.270631  943278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33462 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/custom-weave-20210813002927-676638/id_rsa Username:docker}
	I0813 00:42:17.354311  943278 start.go:129] duration metric: createHost completed in 7.070464797s
	I0813 00:42:17.354343  943278 start.go:80] releasing machines lock for "custom-weave-20210813002927-676638", held for 7.070607291s
	I0813 00:42:17.354457  943278 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20210813002927-676638
	I0813 00:42:17.402267  943278 ssh_runner.go:149] Run: systemctl --version
	I0813 00:42:17.402317  943278 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 00:42:17.402341  943278 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210813002927-676638
	I0813 00:42:17.402375  943278 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20210813002927-676638
	I0813 00:42:17.451480  943278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33462 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/custom-weave-20210813002927-676638/id_rsa Username:docker}
	I0813 00:42:17.453352  943278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33462 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/custom-weave-20210813002927-676638/id_rsa Username:docker}
	I0813 00:42:17.567858  943278 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 00:42:17.589834  943278 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 00:42:17.601073  943278 docker.go:153] disabling docker service ...
	I0813 00:42:17.601137  943278 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 00:42:17.617960  943278 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 00:42:17.628279  943278 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 00:42:17.699996  943278 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 00:42:17.781667  943278 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 00:42:17.792750  943278 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 00:42:17.815999  943278 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 00:42:17.826734  943278 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 00:42:17.834452  943278 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 00:42:17.834516  943278 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 00:42:17.842782  943278 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 00:42:17.850510  943278 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 00:42:17.913519  943278 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 00:42:17.924339  943278 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 00:42:17.924416  943278 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:42:17.928353  943278 start.go:417] Will wait 60s for crictl version
	I0813 00:42:17.928420  943278 ssh_runner.go:149] Run: sudo crictl version
	I0813 00:42:17.963042  943278 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 00:42:17.963141  943278 ssh_runner.go:149] Run: crio --version
	I0813 00:42:18.032191  943278 ssh_runner.go:149] Run: crio --version
	I0813 00:42:18.110974  943278 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0813 00:42:18.111073  943278 cli_runner.go:115] Run: docker network inspect custom-weave-20210813002927-676638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 00:42:18.155349  943278 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0813 00:42:18.159054  943278 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 00:42:18.169044  943278 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:42:18.169123  943278 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 00:42:18.218281  943278 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 00:42:18.218310  943278 crio.go:333] Images already preloaded, skipping extraction
	I0813 00:42:18.218361  943278 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 00:42:18.244765  943278 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 00:42:18.244795  943278 cache_images.go:74] Images are preloaded, skipping loading
	I0813 00:42:18.244888  943278 ssh_runner.go:149] Run: crio config
	I0813 00:42:18.327039  943278 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0813 00:42:18.327113  943278 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 00:42:18.327137  943278 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20210813002927-676638 NodeName:custom-weave-20210813002927-676638 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 00:42:18.327341  943278 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "custom-weave-20210813002927-676638"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 00:42:18.327487  943278 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=custom-weave-20210813002927-676638 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210813002927-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:}
	I0813 00:42:18.327551  943278 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 00:42:18.335908  943278 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 00:42:18.336013  943278 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 00:42:18.343269  943278 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (565 bytes)
	I0813 00:42:18.356398  943278 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 00:42:18.369712  943278 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0813 00:42:18.383241  943278 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0813 00:42:18.386736  943278 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 00:42:18.396811  943278 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638 for IP: 192.168.49.2
	I0813 00:42:18.396878  943278 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0813 00:42:18.396902  943278 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0813 00:42:18.396969  943278 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.key
	I0813 00:42:18.396983  943278 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt with IP's: []
	I0813 00:42:18.535413  943278 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt ...
	I0813 00:42:18.535448  943278 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt: {Name:mk04cc89e6435cf8ec29a0b091a2a0469c20559a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:42:18.535684  943278 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.key ...
	I0813 00:42:18.535698  943278 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.key: {Name:mk69fa105585e4b7038c5a95f83b3fe1554b58a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:42:18.535782  943278 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.key.dd3b5fb2
	I0813 00:42:18.535792  943278 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 00:42:18.722598  943278 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.crt.dd3b5fb2 ...
	I0813 00:42:18.722635  943278 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.crt.dd3b5fb2: {Name:mk04b4b491c73d758ec763ec105d7efba6aa0a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:42:18.722831  943278 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.key.dd3b5fb2 ...
	I0813 00:42:18.722844  943278 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.key.dd3b5fb2: {Name:mkd6b9e6c1f422d3aa27a6b8b128c12ae067209b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:42:18.722926  943278 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.crt
	I0813 00:42:18.723029  943278 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.key
	I0813 00:42:18.723091  943278 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/proxy-client.key
	I0813 00:42:18.723101  943278 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/proxy-client.crt with IP's: []
	I0813 00:42:18.842577  943278 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/proxy-client.crt ...
	I0813 00:42:18.842616  943278 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/proxy-client.crt: {Name:mk6470976c9ef5b7d98d26083c3f6bda036e2bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:42:18.842842  943278 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/proxy-client.key ...
	I0813 00:42:18.842860  943278 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/proxy-client.key: {Name:mka2f24e0b5f012d9d19b0af08f9a48823809e2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:42:18.843078  943278 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/676638.pem (1338 bytes)
	W0813 00:42:18.843125  943278 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/676638_empty.pem, impossibly tiny 0 bytes
	I0813 00:42:18.843141  943278 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 00:42:18.843180  943278 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1082 bytes)
	I0813 00:42:18.843213  943278 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0813 00:42:18.843251  943278 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1675 bytes)
	I0813 00:42:18.843323  943278 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem (1708 bytes)
	I0813 00:42:18.844328  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 00:42:18.863518  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 00:42:18.908185  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 00:42:18.926326  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 00:42:18.944615  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 00:42:18.962995  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 00:42:18.981145  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 00:42:18.999255  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 00:42:19.016472  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 00:42:19.034845  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/676638.pem --> /usr/share/ca-certificates/676638.pem (1338 bytes)
	I0813 00:42:19.053001  943278 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem --> /usr/share/ca-certificates/6766382.pem (1708 bytes)
	I0813 00:42:19.069864  943278 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 00:42:19.083542  943278 ssh_runner.go:149] Run: openssl version
	I0813 00:42:19.088683  943278 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6766382.pem && ln -fs /usr/share/ca-certificates/6766382.pem /etc/ssl/certs/6766382.pem"
	I0813 00:42:19.097118  943278 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6766382.pem
	I0813 00:42:19.100507  943278 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 13 00:05 /usr/share/ca-certificates/6766382.pem
	I0813 00:42:19.100554  943278 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6766382.pem
	I0813 00:42:19.105591  943278 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6766382.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 00:42:19.113537  943278 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 00:42:19.121304  943278 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:42:19.124411  943278 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 12 23:55 /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:42:19.124471  943278 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:42:19.129901  943278 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 00:42:19.137414  943278 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/676638.pem && ln -fs /usr/share/ca-certificates/676638.pem /etc/ssl/certs/676638.pem"
	I0813 00:42:19.145366  943278 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/676638.pem
	I0813 00:42:19.148604  943278 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 13 00:05 /usr/share/ca-certificates/676638.pem
	I0813 00:42:19.148659  943278 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/676638.pem
	I0813 00:42:19.153737  943278 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/676638.pem /etc/ssl/certs/51391683.0"
	I0813 00:42:19.161519  943278 kubeadm.go:390] StartCluster: {Name:custom-weave-20210813002927-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:custom-weave-20210813002927-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:42:19.161629  943278 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 00:42:19.161703  943278 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 00:42:19.186425  943278 cri.go:76] found id: ""
	I0813 00:42:19.186506  943278 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 00:42:19.194155  943278 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 00:42:19.201720  943278 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 00:42:19.201795  943278 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 00:42:19.209500  943278 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 00:42:19.209559  943278 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 00:42:19.521891  943278 out.go:204]   - Generating certificates and keys ...
	I0813 00:42:18.044936  894487 pod_ready.go:102] pod "metrics-server-8546d8b77b-5jdp6" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:20.544523  894487 pod_ready.go:102] pod "metrics-server-8546d8b77b-5jdp6" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:22.544705  894487 pod_ready.go:102] pod "metrics-server-8546d8b77b-5jdp6" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:21.833918  943278 out.go:204]   - Booting up control plane ...
	I0813 00:42:25.044639  894487 pod_ready.go:102] pod "metrics-server-8546d8b77b-5jdp6" in "kube-system" namespace has status "Ready":"False"
	I0813 00:42:27.544508  894487 pod_ready.go:102] pod "metrics-server-8546d8b77b-5jdp6" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Logs begin at Fri 2021-08-13 00:36:27 UTC, end at Fri 2021-08-13 00:42:32 UTC. --
	Aug 13 00:42:11 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:11.250088016Z" level=info msg="Started container 040094321d48d6d2b00866b99d09b513dda22a6c18ee7ca6fd42018867148478: kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d-fqm5d/kubernetes-dashboard" id=6c8b6fd5-a44d-4060-963f-90071fdc9e8a name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 00:42:11 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:11.419862721Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=fd0a4b25-1ed0-4e19-aa4b-596c3353a92a name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:42:11 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:11.420163553Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=fd0a4b25-1ed0-4e19-aa4b-596c3353a92a name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:42:11 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:11.930109588Z" level=info msg="Trying to access \"k8s.gcr.io/echoserver:1.4\""
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.041810063Z" level=info msg="Pulled image: k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=d97ddfd6-1651-4adc-ae41-0fa0d06409d5 name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.042701436Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=8d0ca4a3-506d-4f9e-a29d-ceb70fef2a13 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.044534147Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8d0ca4a3-506d-4f9e-a29d-ceb70fef2a13 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.045605227Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-tbr4l/dashboard-metrics-scraper" id=fb1e3589-b94f-4fea-a7b0-d053e72dd176 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.216807471Z" level=info msg="Created container 51e431ed38a8c6f1b8ef8d5732d6acc26f37235db7f482c235e23ddc3ff31cb8: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-tbr4l/dashboard-metrics-scraper" id=fb1e3589-b94f-4fea-a7b0-d053e72dd176 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.217453342Z" level=info msg="Starting container: 51e431ed38a8c6f1b8ef8d5732d6acc26f37235db7f482c235e23ddc3ff31cb8" id=7ce2e258-44a0-4475-812b-cf518925cf62 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.244869487Z" level=info msg="Started container 51e431ed38a8c6f1b8ef8d5732d6acc26f37235db7f482c235e23ddc3ff31cb8: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-tbr4l/dashboard-metrics-scraper" id=7ce2e258-44a0-4475-812b-cf518925cf62 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.436919349Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=c58b1fe6-857d-4a31-8d9a-1a82f7674d1f name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.438868257Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c58b1fe6-857d-4a31-8d9a-1a82f7674d1f name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.440517454Z" level=info msg="Checking image status: k8s.gcr.io/echoserver:1.4" id=c52f4b13-6234-4ab2-9313-793cecfff49e name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.442477186Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[k8s.gcr.io/echoserver:1.4],RepoDigests:[k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c52f4b13-6234-4ab2-9313-793cecfff49e name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.443298116Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-tbr4l/dashboard-metrics-scraper" id=6a29536b-9869-437c-90a1-baa85863c628 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.604871188Z" level=info msg="Created container 45854d5646e93d9160501134d3f2d50d62a88771df74a22cfb6256b564ecde00: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-tbr4l/dashboard-metrics-scraper" id=6a29536b-9869-437c-90a1-baa85863c628 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.605525037Z" level=info msg="Starting container: 45854d5646e93d9160501134d3f2d50d62a88771df74a22cfb6256b564ecde00" id=dc99ba9e-e60e-443c-9b5c-1a0e1a3baa98 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 00:42:18 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:18.632270716Z" level=info msg="Started container 45854d5646e93d9160501134d3f2d50d62a88771df74a22cfb6256b564ecde00: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-tbr4l/dashboard-metrics-scraper" id=dc99ba9e-e60e-443c-9b5c-1a0e1a3baa98 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 13 00:42:19 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:19.440233230Z" level=info msg="Removing container: 51e431ed38a8c6f1b8ef8d5732d6acc26f37235db7f482c235e23ddc3ff31cb8" id=74d06218-3ca8-4cb7-9af9-a7fc34023dc6 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 00:42:19 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:19.481615203Z" level=info msg="Removed container 51e431ed38a8c6f1b8ef8d5732d6acc26f37235db7f482c235e23ddc3ff31cb8: kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-tbr4l/dashboard-metrics-scraper" id=74d06218-3ca8-4cb7-9af9-a7fc34023dc6 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 13 00:42:26 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:26.308170356Z" level=info msg="Checking image status: fake.domain/k8s.gcr.io/echoserver:1.4" id=996372ff-6e2e-4e79-9a01-dfa4b3c873f1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:42:26 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:26.308402055Z" level=info msg="Image fake.domain/k8s.gcr.io/echoserver:1.4 not found" id=996372ff-6e2e-4e79-9a01-dfa4b3c873f1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 13 00:42:26 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:26.309043349Z" level=info msg="Pulling image: fake.domain/k8s.gcr.io/echoserver:1.4" id=eaadf516-2e7a-4fac-900b-c15a2251338b name=/runtime.v1alpha2.ImageService/PullImage
	Aug 13 00:42:26 embed-certs-20210813003107-676638 crio[397]: time="2021-08-13 00:42:26.327310697Z" level=info msg="Trying to access \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID
	45854d5646e93       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7   13 seconds ago      Exited              dashboard-metrics-scraper   1                   e6ce6988990c7
	040094321d48d       9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db   21 seconds ago      Running             kubernetes-dashboard        0                   86cffc3750db7
	65b03c21c2ec0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   21 seconds ago      Running             storage-provisioner         0                   cb925b12d495b
	ea689d9befa0c       296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899   23 seconds ago      Running             coredns                     0                   e64d84890eeb1
	9c14caeff902e       adb2816ea823a9eef18ab4768bcb11f799030ceb4334a79253becc45fa6cce92   24 seconds ago      Running             kube-proxy                  0                   df99cf3e1ebfb
	01db811538c75       6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb   24 seconds ago      Running             kindnet-cni                 0                   fa30e69c4410e
	5c87a3142b07a       6be0dc1302e30439f8ad5d898279d7dbb1a08fb10a6c49d3379192bf2454428a   45 seconds ago      Running             kube-scheduler              0                   203736ee9a567
	6df37dc960612       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   45 seconds ago      Running             etcd                        0                   fce2af5d29500
	4e9effa00d573       bc2bb319a7038a40a08b2ec2e412a9600b0b1a542aea85c3348fa9813c01d8e9   45 seconds ago      Running             kube-controller-manager     0                   d854327c009d7
	29aee76a5223f       3d174f00aa39eb8552a9596610d87ae90e0ad51ad5282bd5dae421ca7d4a0b80   45 seconds ago      Running             kube-apiserver              0                   407478dc20340
	
	* 
	* ==> coredns [ea689d9befa0c599dd0052be7df63a63f89b17670c14e1c45dbb4df86d8898b4] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20210813003107-676638
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20210813003107-676638
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19
	                    minikube.k8s.io/name=embed-certs-20210813003107-676638
	                    minikube.k8s.io/updated_at=2021_08_13T00_41_54_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 00:41:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20210813003107-676638
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 00:42:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 00:42:29 +0000   Fri, 13 Aug 2021 00:41:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 00:42:29 +0000   Fri, 13 Aug 2021 00:41:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 00:42:29 +0000   Fri, 13 Aug 2021 00:41:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 00:42:29 +0000   Fri, 13 Aug 2021 00:42:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-20210813003107-676638
	Capacity:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  309568300Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32951368Ki
	  pods:               110
	System Info:
	  Machine ID:                 760e67beb8554645829f2357c8eb4ae7
	  System UUID:                b9cf02f7-a0a3-4d65-9e41-ca1ecfa1ffa6
	  Boot ID:                    f12e4c71-5c79-4cb7-b9de-5d4c99f61cf1
	  Kernel Version:             4.9.0-16-amd64
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.20.3
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-9bdqj                                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-embed-certs-20210813003107-676638                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-m9wdh                                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-embed-certs-20210813003107-676638             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-embed-certs-20210813003107-676638    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-bhdzr                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-embed-certs-20210813003107-676638             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 metrics-server-7c784ccb57-d6wcs                              100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         23s
	  kube-system                 storage-provisioner                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kubernetes-dashboard        dashboard-metrics-scraper-8685c45546-tbr4l                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kubernetes-dashboard        kubernetes-dashboard-6fcdf4f6d-fqm5d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             520Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  47s (x5 over 47s)  kubelet     Node embed-certs-20210813003107-676638 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x5 over 47s)  kubelet     Node embed-certs-20210813003107-676638 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x5 over 47s)  kubelet     Node embed-certs-20210813003107-676638 status is now: NodeHasSufficientPID
	  Normal  Starting                 33s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s                kubelet     Node embed-certs-20210813003107-676638 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s                kubelet     Node embed-certs-20210813003107-676638 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s                kubelet     Node embed-certs-20210813003107-676638 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             33s                kubelet     Node embed-certs-20210813003107-676638 status is now: NodeNotReady
	  Normal  NodeReady                26s                kubelet     Node embed-certs-20210813003107-676638 status is now: NodeReady
	  Normal  Starting                 23s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +3.583653] IPv4: martian source 10.244.0.2 from 10.96.0.1, on dev br-c83c1e95b109
	[  +0.000003] ll header: 00000000: 02 42 6b e2 99 34 02 42 c0 a8 5e 02 08 00        .Bk..4.B..^...
	[  +5.503929] IPv4: martian source 10.244.0.2 from 10.96.0.1, on dev br-c83c1e95b109
	[  +0.000003] ll header: 00000000: 02 42 6b e2 99 34 02 42 c0 a8 5e 02 08 00        .Bk..4.B..^...
	[  +1.407633] IPv4: martian source 10.244.0.2 from 10.96.0.1, on dev br-c83c1e95b109
	[  +0.000003] ll header: 00000000: 02 42 6b e2 99 34 02 42 c0 a8 5e 02 08 00        .Bk..4.B..^...
	[  +0.709913] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 52 9a bb 7e fa a0 08 06        ......R..~....
	[  +0.000003] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 52 9a bb 7e fa a0 08 06        ......R..~....
	[ +13.113088] IPv4: martian source 10.244.0.2 from 10.96.0.1, on dev br-c83c1e95b109
	[  +0.000002] ll header: 00000000: 02 42 6b e2 99 34 02 42 c0 a8 5e 02 08 00        .Bk..4.B..^...
	[  +1.437183] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth0567c5b4
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 06 48 0f 1a 6b ec 08 06        .......H..k...
	[  +3.912036] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vetha6038f6d
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 46 17 bb d7 24 a8 08 06        ......F...$...
	[Aug13 00:42] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethb2574714
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7e 15 80 76 73 6b 08 06        ......~..vsk..
	[  +0.547641] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethedbfb420
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 82 cc db 01 6c 04 08 06        ..........l...
	[  +0.108325] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev veth383fe7ef
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff be f7 08 35 56 4e 08 06        .........5VN..
	[  +1.315066] cgroup: cgroup2: unknown option "nsdelegate"
	[  +8.350581] IPv4: martian source 10.244.0.2 from 10.96.0.1, on dev br-c83c1e95b109
	[  +0.000003] ll header: 00000000: 02 42 6b e2 99 34 02 42 c0 a8 5e 02 08 00        .Bk..4.B..^...
	
	* 
	* ==> etcd [6df37dc960612aba9ced21bd1d7e1bc1ecddb932504c447d3ba7d8b184b0bac1] <==
	* raft2021/08/13 00:41:46 INFO: dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)
	2021-08-13 00:41:46.499455 W | auth: simple token is not cryptographically signed
	2021-08-13 00:41:46.505387 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-13 00:41:46.505670 I | etcdserver: dfc97eb0aae75b33 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/13 00:41:46 INFO: dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)
	2021-08-13 00:41:46.506223 I | etcdserver/membership: added member dfc97eb0aae75b33 [https://192.168.94.2:2380] to cluster da400bbece288f5a
	2021-08-13 00:41:46.507832 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 00:41:46.507939 I | embed: listening for peers on 192.168.94.2:2380
	2021-08-13 00:41:46.508004 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2021/08/13 00:41:47 INFO: dfc97eb0aae75b33 is starting a new election at term 1
	raft2021/08/13 00:41:47 INFO: dfc97eb0aae75b33 became candidate at term 2
	raft2021/08/13 00:41:47 INFO: dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2
	raft2021/08/13 00:41:47 INFO: dfc97eb0aae75b33 became leader at term 2
	raft2021/08/13 00:41:47 INFO: raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2
	2021-08-13 00:41:47.196994 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 00:41:47.197016 I | embed: ready to serve client requests
	2021-08-13 00:41:47.197068 I | etcdserver: published {Name:embed-certs-20210813003107-676638 ClientURLs:[https://192.168.94.2:2379]} to cluster da400bbece288f5a
	2021-08-13 00:41:47.197082 I | embed: ready to serve client requests
	2021-08-13 00:41:47.197714 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 00:41:47.197784 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 00:41:47.199500 I | embed: serving client requests on 192.168.94.2:2379
	2021-08-13 00:41:47.199675 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 00:42:07.296033 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:42:13.699430 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 00:42:23.699599 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  00:42:32 up  4:25,  0 users,  load average: 2.50, 2.11, 2.25
	Linux embed-certs-20210813003107-676638 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [29aee76a5223f84d79250f1a8d80fff6f9d8438997790d2000c77ab873587a02] <==
	* I0813 00:41:50.603239       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0813 00:41:50.603664       1 cache.go:39] Caches are synced for autoregister controller
	I0813 00:41:51.502242       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0813 00:41:51.502272       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0813 00:41:51.506766       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0813 00:41:51.512271       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0813 00:41:51.512296       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0813 00:41:51.995760       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0813 00:41:52.027120       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0813 00:41:52.121476       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I0813 00:41:52.122556       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 00:41:52.126159       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0813 00:41:53.151189       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0813 00:41:53.725180       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0813 00:41:53.798806       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0813 00:41:59.108974       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0813 00:42:06.660045       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 00:42:06.708838       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	W0813 00:42:11.791736       1 handler_proxy.go:102] no RequestInfo found in the context
	E0813 00:42:11.791846       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0813 00:42:11.791860       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0813 00:42:28.566293       1 client.go:360] parsed scheme: "passthrough"
	I0813 00:42:28.566341       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 00:42:28.566349       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [4e9effa00d57335d8efdfe7b670b81bda098522d05271bc3e229257b0c283ab0] <==
	* I0813 00:42:09.014144       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I0813 00:42:09.101341       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0813 00:42:09.112448       1 replica_set.go:532] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0813 00:42:09.194805       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-d6wcs"
	I0813 00:42:09.712823       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0813 00:42:09.721683       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 00:42:09.724716       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0813 00:42:09.793971       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 00:42:09.796570       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 00:42:09.800879       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 00:42:09.804593       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 00:42:09.807440       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 00:42:09.809617       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 00:42:09.809619       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 00:42:09.810219       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 00:42:09.810273       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 00:42:09.893461       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 00:42:09.893845       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 00:42:09.899025       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 00:42:09.899118       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0813 00:42:09.900976       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0813 00:42:09.900410       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 00:42:09.918825       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-fqm5d"
	I0813 00:42:09.997639       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-tbr4l"
	I0813 00:42:11.004795       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [9c14caeff902e39d3266b8a18e769d2c677e22b928256cfd576ad3e8125aefdf] <==
	* I0813 00:42:08.715395       1 node.go:172] Successfully retrieved node IP: 192.168.94.2
	I0813 00:42:08.715466       1 server_others.go:140] Detected node IP 192.168.94.2
	W0813 00:42:08.715497       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 00:42:09.105453       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 00:42:09.105542       1 server_others.go:212] Using iptables Proxier.
	I0813 00:42:09.105557       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 00:42:09.105590       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 00:42:09.106434       1 server.go:643] Version: v1.21.3
	I0813 00:42:09.107265       1 config.go:315] Starting service config controller
	I0813 00:42:09.107365       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 00:42:09.107461       1 config.go:224] Starting endpoint slice config controller
	I0813 00:42:09.107476       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 00:42:09.195011       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 00:42:09.197411       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 00:42:09.212834       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 00:42:09.213071       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [5c87a3142b07a277f04ffd2c844dab82a7e565d4805d5bd2ed00ec41b8d37d2e] <==
	* I0813 00:41:50.604517       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0813 00:41:50.604582       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 00:41:50.607503       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 00:41:50.607739       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 00:41:50.607946       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 00:41:50.608040       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 00:41:50.608120       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 00:41:50.608267       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 00:41:50.608268       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 00:41:50.609112       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 00:41:50.609191       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 00:41:50.609411       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 00:41:50.609563       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 00:41:50.609920       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 00:41:50.609996       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 00:41:50.610141       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 00:41:51.490510       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 00:41:51.590449       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 00:41:51.615229       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 00:41:51.690771       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 00:41:51.725348       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 00:41:51.767535       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 00:41:51.823986       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 00:41:51.971043       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0813 00:41:54.904756       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 00:36:27 UTC, end at Fri 2021-08-13 00:42:32 UTC. --
	Aug 13 00:42:09 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:09.923688    5686 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 00:42:10 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:10.008915    5686 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 00:42:10 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:10.107413    5686 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a208d674-9151-445a-8368-919815e63b5a-tmp-volume\") pod \"kubernetes-dashboard-6fcdf4f6d-fqm5d\" (UID: \"a208d674-9151-445a-8368-919815e63b5a\") "
	Aug 13 00:42:10 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:10.107481    5686 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp5v9\" (UniqueName: \"kubernetes.io/projected/a208d674-9151-445a-8368-919815e63b5a-kube-api-access-dp5v9\") pod \"kubernetes-dashboard-6fcdf4f6d-fqm5d\" (UID: \"a208d674-9151-445a-8368-919815e63b5a\") "
	Aug 13 00:42:10 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:10.209688    5686 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/52069898-1ec9-4b87-a7c5-aae9fcf1301a-tmp-volume\") pod \"dashboard-metrics-scraper-8685c45546-tbr4l\" (UID: \"52069898-1ec9-4b87-a7c5-aae9fcf1301a\") "
	Aug 13 00:42:10 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:10.209787    5686 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7s9h\" (UniqueName: \"kubernetes.io/projected/52069898-1ec9-4b87-a7c5-aae9fcf1301a-kube-api-access-t7s9h\") pod \"dashboard-metrics-scraper-8685c45546-tbr4l\" (UID: \"52069898-1ec9-4b87-a7c5-aae9fcf1301a\") "
	Aug 13 00:42:10 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:10.618748    5686 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 00:42:10 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:10.618813    5686 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 00:42:10 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:10.619005    5686 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-86g22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-d6wcs_kube-system(ca3671c5-bdeb-4af0-8e6c-3e69eddf7645): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host
	Aug 13 00:42:10 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:10.619058    5686 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-d6wcs" podUID=ca3671c5-bdeb-4af0-8e6c-3e69eddf7645
	Aug 13 00:42:11 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:11.420397    5686 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-d6wcs" podUID=ca3671c5-bdeb-4af0-8e6c-3e69eddf7645
	Aug 13 00:42:18 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:18.436356    5686 scope.go:111] "RemoveContainer" containerID="51e431ed38a8c6f1b8ef8d5732d6acc26f37235db7f482c235e23ddc3ff31cb8"
	Aug 13 00:42:19 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:19.439219    5686 scope.go:111] "RemoveContainer" containerID="51e431ed38a8c6f1b8ef8d5732d6acc26f37235db7f482c235e23ddc3ff31cb8"
	Aug 13 00:42:19 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:19.439348    5686 scope.go:111] "RemoveContainer" containerID="45854d5646e93d9160501134d3f2d50d62a88771df74a22cfb6256b564ecde00"
	Aug 13 00:42:19 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:19.439734    5686 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-tbr4l_kubernetes-dashboard(52069898-1ec9-4b87-a7c5-aae9fcf1301a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-tbr4l" podUID=52069898-1ec9-4b87-a7c5-aae9fcf1301a
	Aug 13 00:42:19 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:19.635767    5686 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/f6d10369f2ec95f1e489d682755a97d3558b63977214599bb618fdb50aedbea1/docker/f6d10369f2ec95f1e489d682755a97d3558b63977214599bb618fdb50aedbea1\": RecentStats: unable to find data in memory cache]"
	Aug 13 00:42:20 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:20.442689    5686 scope.go:111] "RemoveContainer" containerID="45854d5646e93d9160501134d3f2d50d62a88771df74a22cfb6256b564ecde00"
	Aug 13 00:42:20 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:20.443066    5686 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-tbr4l_kubernetes-dashboard(52069898-1ec9-4b87-a7c5-aae9fcf1301a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-tbr4l" podUID=52069898-1ec9-4b87-a7c5-aae9fcf1301a
	Aug 13 00:42:21 embed-certs-20210813003107-676638 kubelet[5686]: I0813 00:42:21.444617    5686 scope.go:111] "RemoveContainer" containerID="45854d5646e93d9160501134d3f2d50d62a88771df74a22cfb6256b564ecde00"
	Aug 13 00:42:21 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:21.445016    5686 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-tbr4l_kubernetes-dashboard(52069898-1ec9-4b87-a7c5-aae9fcf1301a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-tbr4l" podUID=52069898-1ec9-4b87-a7c5-aae9fcf1301a
	Aug 13 00:42:26 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:26.333311    5686 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 00:42:26 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:26.333373    5686 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 13 00:42:26 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:26.333549    5686 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-86g22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-d6wcs_kube-system(ca3671c5-bdeb-4af0-8e6c-3e69eddf7645): ErrImagePull: rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host
	Aug 13 00:42:26 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:26.333606    5686 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = error pinging docker registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-d6wcs" podUID=ca3671c5-bdeb-4af0-8e6c-3e69eddf7645
	Aug 13 00:42:29 embed-certs-20210813003107-676638 kubelet[5686]: E0813 00:42:29.741608    5686 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/f6d10369f2ec95f1e489d682755a97d3558b63977214599bb618fdb50aedbea1/docker/f6d10369f2ec95f1e489d682755a97d3558b63977214599bb618fdb50aedbea1\": RecentStats: unable to find data in memory cache]"
	
	* 
	* ==> kubernetes-dashboard [040094321d48d6d2b00866b99d09b513dda22a6c18ee7ca6fd42018867148478] <==
	* 2021/08/13 00:42:11 Using namespace: kubernetes-dashboard
	2021/08/13 00:42:11 Using in-cluster config to connect to apiserver
	2021/08/13 00:42:11 Using secret token for csrf signing
	2021/08/13 00:42:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 00:42:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 00:42:11 Successful initial request to the apiserver, version: v1.21.3
	2021/08/13 00:42:11 Generating JWE encryption key
	2021/08/13 00:42:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 00:42:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 00:42:11 Initializing JWE encryption key from synchronized object
	2021/08/13 00:42:11 Creating in-cluster Sidecar client
	2021/08/13 00:42:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 00:42:11 Serving insecurely on HTTP port: 9090
	2021/08/13 00:42:11 Starting overwatch
	
	* 
	* ==> storage-provisioner [65b03c21c2ec0eb764237745134555f63f7405afd3ba24ceb8dfe552ccdb23af] <==
	* I0813 00:42:10.433359       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 00:42:10.493520       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 00:42:10.493671       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 00:42:10.501645       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 00:42:10.501836       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20210813003107-676638_e668cfe8-4b95-45c6-9d37-a325554b1ca5!
	I0813 00:42:10.501814       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f38e255c-929c-4864-8155-18f2a84213bd", APIVersion:"v1", ResourceVersion:"584", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20210813003107-676638_e668cfe8-4b95-45c6-9d37-a325554b1ca5 became leader
	I0813 00:42:10.603029       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20210813003107-676638_e668cfe8-4b95-45c6-9d37-a325554b1ca5!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210813003107-676638 -n embed-certs-20210813003107-676638
helpers_test.go:262: (dbg) Run:  kubectl --context embed-certs-20210813003107-676638 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-7c784ccb57-d6wcs
helpers_test.go:273: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context embed-certs-20210813003107-676638 describe pod metrics-server-7c784ccb57-d6wcs
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20210813003107-676638 describe pod metrics-server-7c784ccb57-d6wcs: exit status 1 (70.058964ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-7c784ccb57-d6wcs" not found

** /stderr **
helpers_test.go:278: kubectl --context embed-certs-20210813003107-676638 describe pod metrics-server-7c784ccb57-d6wcs: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.51s)
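Log dumps like the kubelet section above are easier to triage when grouped by severity and source file. A minimal sketch (assuming klog-style headers such as `E0813 00:42:10.618748    5686 remote_image.go:114] ...`; the `tally` helper and the sample lines are illustrative, not part of the test harness):

```python
import re
from collections import Counter

# klog header: severity letter (I/W/E/F), MMDD, time, PID, source file:line]
KLOG_RE = re.compile(
    r"^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+\d+\s+([\w.]+):(\d+)\]"
)

def tally(lines):
    """Count log lines per (severity, source file) for quick triage."""
    counts = Counter()
    for line in lines:
        m = KLOG_RE.match(line.strip())
        if m:
            severity, _date, _time, src, _lineno = m.groups()
            counts[(severity, src)] += 1
    return counts

sample = [
    'E0813 00:42:10.618748    5686 remote_image.go:114] "PullImage from image service failed"',
    'E0813 00:42:10.618813    5686 kuberuntime_image.go:51] "Failed to pull image"',
    'I0813 00:42:18.436356    5686 scope.go:111] "RemoveContainer"',
]
print(tally(sample))
```

Running it over the kubelet excerpt above quickly surfaces that nearly all E-level lines come from the same image-pull path (`remote_image.go` / `kuberuntime_image.go` / `pod_workers.go`), i.e. the intentional `fake.domain` registry failure.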

TestNetworkPlugins/group/calico/Start (532.37s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20210813002927-676638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=crio

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20210813002927-676638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=crio: exit status 80 (8m52.349623541s)

-- stdout --
	* [calico-20210813002927-676638] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node calico-20210813002927-676638 in cluster calico-20210813002927-676638
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0813 00:43:34.903756  956294 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:43:34.903856  956294 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:43:34.903866  956294 out.go:311] Setting ErrFile to fd 2...
	I0813 00:43:34.903872  956294 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:43:34.904028  956294 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:43:34.904332  956294 out.go:305] Setting JSON to false
	I0813 00:43:34.953611  956294 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":15976,"bootTime":1628799438,"procs":314,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 00:43:34.953753  956294 start.go:121] virtualization: kvm guest
	I0813 00:43:34.956883  956294 out.go:177] * [calico-20210813002927-676638] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 00:43:34.958602  956294 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:43:34.957058  956294 notify.go:169] Checking for updates...
	I0813 00:43:34.960233  956294 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 00:43:34.961858  956294 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:43:34.963374  956294 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 00:43:34.964126  956294 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 00:43:35.022077  956294 docker.go:132] docker version: linux-19.03.15
	I0813 00:43:35.022220  956294 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 00:43:35.119602  956294 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:77 SystemTime:2021-08-13 00:43:35.066649568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 00:43:35.119695  956294 docker.go:244] overlay module found
	I0813 00:43:35.122025  956294 out.go:177] * Using the docker driver based on user configuration
	I0813 00:43:35.122061  956294 start.go:278] selected driver: docker
	I0813 00:43:35.122070  956294 start.go:751] validating driver "docker" against <nil>
	I0813 00:43:35.122097  956294 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 00:43:35.122161  956294 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 00:43:35.122184  956294 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0813 00:43:35.123999  956294 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 00:43:35.125178  956294 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 00:43:35.229958  956294 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:77 SystemTime:2021-08-13 00:43:35.173211834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 00:43:35.230142  956294 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 00:43:35.230373  956294 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 00:43:35.230405  956294 cni.go:93] Creating CNI manager for "calico"
	I0813 00:43:35.230414  956294 start_flags.go:272] Found "Calico" CNI - setting NetworkPlugin=cni
	I0813 00:43:35.230430  956294 start_flags.go:277] config:
	{Name:calico-20210813002927-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210813002927-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:43:35.232814  956294 out.go:177] * Starting control plane node calico-20210813002927-676638 in cluster calico-20210813002927-676638
	I0813 00:43:35.232869  956294 cache.go:117] Beginning downloading kic base image for docker with crio
	I0813 00:43:35.234458  956294 out.go:177] * Pulling base image ...
	I0813 00:43:35.234493  956294 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:43:35.234535  956294 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0813 00:43:35.234559  956294 cache.go:56] Caching tarball of preloaded images
	I0813 00:43:35.234591  956294 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0813 00:43:35.234767  956294 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 00:43:35.234786  956294 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on crio
	I0813 00:43:35.234942  956294 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/config.json ...
	I0813 00:43:35.234968  956294 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/config.json: {Name:mk407f7b56fa260c6ff7ef1cb855fbaa762bbe8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:43:35.343840  956294 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0813 00:43:35.343879  956294 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0813 00:43:35.343898  956294 cache.go:205] Successfully downloaded all kic artifacts
	I0813 00:43:35.343937  956294 start.go:313] acquiring machines lock for calico-20210813002927-676638: {Name:mk72ebf3b3cf8064482d565550da62282425fed2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 00:43:35.344121  956294 start.go:317] acquired machines lock for "calico-20210813002927-676638" in 157.276µs
	I0813 00:43:35.344159  956294 start.go:89] Provisioning new machine with config: &{Name:calico-20210813002927-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210813002927-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 00:43:35.344276  956294 start.go:126] createHost starting for "" (driver="docker")
	I0813 00:43:35.347193  956294 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0813 00:43:35.347520  956294 start.go:160] libmachine.API.Create for "calico-20210813002927-676638" (driver="docker")
	I0813 00:43:35.347553  956294 client.go:168] LocalClient.Create starting
	I0813 00:43:35.347655  956294 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0813 00:43:35.347724  956294 main.go:130] libmachine: Decoding PEM data...
	I0813 00:43:35.347750  956294 main.go:130] libmachine: Parsing certificate...
	I0813 00:43:35.347890  956294 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0813 00:43:35.347912  956294 main.go:130] libmachine: Decoding PEM data...
	I0813 00:43:35.347925  956294 main.go:130] libmachine: Parsing certificate...
	I0813 00:43:35.348347  956294 cli_runner.go:115] Run: docker network inspect calico-20210813002927-676638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 00:43:35.391667  956294 cli_runner.go:162] docker network inspect calico-20210813002927-676638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 00:43:35.391778  956294 network_create.go:255] running [docker network inspect calico-20210813002927-676638] to gather additional debugging logs...
	I0813 00:43:35.391813  956294 cli_runner.go:115] Run: docker network inspect calico-20210813002927-676638
	W0813 00:43:35.444528  956294 cli_runner.go:162] docker network inspect calico-20210813002927-676638 returned with exit code 1
	I0813 00:43:35.444572  956294 network_create.go:258] error running [docker network inspect calico-20210813002927-676638]: docker network inspect calico-20210813002927-676638: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20210813002927-676638
	I0813 00:43:35.444609  956294 network_create.go:260] output of [docker network inspect calico-20210813002927-676638]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20210813002927-676638
	
	** /stderr **
	I0813 00:43:35.444701  956294 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 00:43:35.487664  956294 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000542078] misses:0}
	I0813 00:43:35.487714  956294 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 00:43:35.488296  956294 network_create.go:106] attempt to create docker network calico-20210813002927-676638 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0813 00:43:35.488397  956294 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20210813002927-676638
	I0813 00:43:35.576128  956294 network_create.go:90] docker network calico-20210813002927-676638 192.168.49.0/24 created
	I0813 00:43:35.576168  956294 kic.go:106] calculated static IP "192.168.49.2" for the "calico-20210813002927-676638" container
	I0813 00:43:35.576250  956294 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 00:43:35.625511  956294 cli_runner.go:115] Run: docker volume create calico-20210813002927-676638 --label name.minikube.sigs.k8s.io=calico-20210813002927-676638 --label created_by.minikube.sigs.k8s.io=true
	I0813 00:43:35.676554  956294 oci.go:102] Successfully created a docker volume calico-20210813002927-676638
	I0813 00:43:35.676647  956294 cli_runner.go:115] Run: docker run --rm --name calico-20210813002927-676638-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210813002927-676638 --entrypoint /usr/bin/test -v calico-20210813002927-676638:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0813 00:43:36.521477  956294 oci.go:106] Successfully prepared a docker volume calico-20210813002927-676638
	W0813 00:43:36.521549  956294 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 00:43:36.521564  956294 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 00:43:36.521572  956294 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:43:36.521609  956294 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 00:43:36.521627  956294 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 00:43:36.521683  956294 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-20210813002927-676638:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 00:43:36.619500  956294 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20210813002927-676638 --name calico-20210813002927-676638 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210813002927-676638 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20210813002927-676638 --network calico-20210813002927-676638 --ip 192.168.49.2 --volume calico-20210813002927-676638:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0813 00:43:37.251593  956294 cli_runner.go:115] Run: docker container inspect calico-20210813002927-676638 --format={{.State.Running}}
	I0813 00:43:37.314183  956294 cli_runner.go:115] Run: docker container inspect calico-20210813002927-676638 --format={{.State.Status}}
	I0813 00:43:37.368341  956294 cli_runner.go:115] Run: docker exec calico-20210813002927-676638 stat /var/lib/dpkg/alternatives/iptables
	I0813 00:43:37.522958  956294 oci.go:278] the created container "calico-20210813002927-676638" has a running status.
	I0813 00:43:37.522997  956294 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002927-676638/id_rsa...
	I0813 00:43:37.601359  956294 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002927-676638/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 00:43:38.016076  956294 cli_runner.go:115] Run: docker container inspect calico-20210813002927-676638 --format={{.State.Status}}
	I0813 00:43:38.072238  956294 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 00:43:38.072260  956294 kic_runner.go:115] Args: [docker exec --privileged calico-20210813002927-676638 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 00:43:40.997564  956294 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-20210813002927-676638:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.475829341s)
	I0813 00:43:40.997608  956294 kic.go:188] duration metric: took 4.475997 seconds to extract preloaded images to volume
	I0813 00:43:40.997705  956294 cli_runner.go:115] Run: docker container inspect calico-20210813002927-676638 --format={{.State.Status}}
	I0813 00:43:41.048715  956294 machine.go:88] provisioning docker machine ...
	I0813 00:43:41.048767  956294 ubuntu.go:169] provisioning hostname "calico-20210813002927-676638"
	I0813 00:43:41.048863  956294 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210813002927-676638
	I0813 00:43:41.097905  956294 main.go:130] libmachine: Using SSH client type: native
	I0813 00:43:41.098196  956294 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33472 <nil> <nil>}
	I0813 00:43:41.098222  956294 main.go:130] libmachine: About to run SSH command:
	sudo hostname calico-20210813002927-676638 && echo "calico-20210813002927-676638" | sudo tee /etc/hostname
	I0813 00:43:41.285824  956294 main.go:130] libmachine: SSH cmd err, output: <nil>: calico-20210813002927-676638
	
	I0813 00:43:41.285900  956294 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210813002927-676638
	I0813 00:43:41.336352  956294 main.go:130] libmachine: Using SSH client type: native
	I0813 00:43:41.336530  956294 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33472 <nil> <nil>}
	I0813 00:43:41.336549  956294 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20210813002927-676638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20210813002927-676638/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20210813002927-676638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 00:43:41.457614  956294 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 00:43:41.457652  956294 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0813 00:43:41.457674  956294 ubuntu.go:177] setting up certificates
	I0813 00:43:41.457686  956294 provision.go:83] configureAuth start
	I0813 00:43:41.457744  956294 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210813002927-676638
	I0813 00:43:41.502842  956294 provision.go:137] copyHostCerts
	I0813 00:43:41.502914  956294 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem, removing ...
	I0813 00:43:41.502925  956294 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 00:43:41.503002  956294 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0813 00:43:41.503100  956294 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem, removing ...
	I0813 00:43:41.503115  956294 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 00:43:41.503144  956294 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1675 bytes)
	I0813 00:43:41.503218  956294 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem, removing ...
	I0813 00:43:41.503229  956294 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 00:43:41.503258  956294 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1082 bytes)
	I0813 00:43:41.503310  956294 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.calico-20210813002927-676638 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20210813002927-676638]
	I0813 00:43:41.612347  956294 provision.go:171] copyRemoteCerts
	I0813 00:43:41.612428  956294 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 00:43:41.612476  956294 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210813002927-676638
	I0813 00:43:41.660323  956294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002927-676638/id_rsa Username:docker}
	I0813 00:43:41.745653  956294 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 00:43:41.764837  956294 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0813 00:43:41.787892  956294 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 00:43:41.812412  956294 provision.go:86] duration metric: configureAuth took 354.710492ms
	I0813 00:43:41.812442  956294 ubuntu.go:193] setting minikube options for container-runtime
	I0813 00:43:41.812836  956294 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210813002927-676638
	I0813 00:43:41.859362  956294 main.go:130] libmachine: Using SSH client type: native
	I0813 00:43:41.859535  956294 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802ea0] 0x802e60 <nil>  [] 0s} 127.0.0.1 33472 <nil> <nil>}
	I0813 00:43:41.859554  956294 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 00:43:42.287687  956294 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 00:43:42.287720  956294 machine.go:91] provisioned docker machine in 1.238975049s
	I0813 00:43:42.287733  956294 client.go:171] LocalClient.Create took 6.940173356s
	I0813 00:43:42.287755  956294 start.go:168] duration metric: libmachine.API.Create for "calico-20210813002927-676638" took 6.940235426s
	I0813 00:43:42.287766  956294 start.go:267] post-start starting for "calico-20210813002927-676638" (driver="docker")
	I0813 00:43:42.287773  956294 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 00:43:42.287851  956294 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 00:43:42.287922  956294 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210813002927-676638
	I0813 00:43:42.341359  956294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002927-676638/id_rsa Username:docker}
	I0813 00:43:42.425546  956294 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 00:43:42.428495  956294 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 00:43:42.428526  956294 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 00:43:42.428536  956294 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 00:43:42.428545  956294 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 00:43:42.428558  956294 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0813 00:43:42.428624  956294 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0813 00:43:42.428746  956294 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem -> 6766382.pem in /etc/ssl/certs
	I0813 00:43:42.428888  956294 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 00:43:42.436846  956294 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem --> /etc/ssl/certs/6766382.pem (1708 bytes)
	I0813 00:43:42.454507  956294 start.go:270] post-start completed in 166.724596ms
	I0813 00:43:42.454876  956294 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210813002927-676638
	I0813 00:43:42.498537  956294 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/config.json ...
	I0813 00:43:42.498766  956294 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 00:43:42.498810  956294 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210813002927-676638
	I0813 00:43:42.541606  956294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002927-676638/id_rsa Username:docker}
	I0813 00:43:42.622063  956294 start.go:129] duration metric: createHost completed in 7.277769967s
	I0813 00:43:42.622106  956294 start.go:80] releasing machines lock for "calico-20210813002927-676638", held for 7.277967438s
	I0813 00:43:42.622211  956294 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210813002927-676638
	I0813 00:43:42.663909  956294 ssh_runner.go:149] Run: systemctl --version
	I0813 00:43:42.663960  956294 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210813002927-676638
	I0813 00:43:42.663972  956294 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 00:43:42.664040  956294 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210813002927-676638
	I0813 00:43:42.708213  956294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002927-676638/id_rsa Username:docker}
	I0813 00:43:42.712004  956294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002927-676638/id_rsa Username:docker}
	I0813 00:43:42.789628  956294 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
	I0813 00:43:42.826188  956294 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0813 00:43:42.836370  956294 docker.go:153] disabling docker service ...
	I0813 00:43:42.836433  956294 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 00:43:42.846697  956294 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 00:43:42.856693  956294 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 00:43:42.930374  956294 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 00:43:43.005521  956294 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 00:43:43.015804  956294 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	image-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 00:43:43.029621  956294 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.4.1"|' -i /etc/crio/crio.conf"
	I0813 00:43:43.038003  956294 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 00:43:43.044704  956294 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 00:43:43.044784  956294 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0813 00:43:43.052486  956294 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 00:43:43.059345  956294 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 00:43:43.121162  956294 ssh_runner.go:149] Run: sudo systemctl start crio
	I0813 00:43:43.131960  956294 start.go:392] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 00:43:43.132047  956294 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
	I0813 00:43:43.135829  956294 start.go:417] Will wait 60s for crictl version
	I0813 00:43:43.135893  956294 ssh_runner.go:149] Run: sudo crictl version
	I0813 00:43:43.166508  956294 start.go:426] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.20.3
	RuntimeApiVersion:  v1alpha1
	I0813 00:43:43.166597  956294 ssh_runner.go:149] Run: crio --version
	I0813 00:43:43.234493  956294 ssh_runner.go:149] Run: crio --version
	I0813 00:43:43.308711  956294 out.go:177] * Preparing Kubernetes v1.21.3 on CRI-O 1.20.3 ...
	I0813 00:43:43.308831  956294 cli_runner.go:115] Run: docker network inspect calico-20210813002927-676638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 00:43:43.367947  956294 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0813 00:43:43.371602  956294 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 00:43:43.382328  956294 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0813 00:43:43.382403  956294 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 00:43:43.436357  956294 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 00:43:43.436380  956294 crio.go:333] Images already preloaded, skipping extraction
	I0813 00:43:43.436430  956294 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 00:43:43.463690  956294 crio.go:424] all images are preloaded for cri-o runtime.
	I0813 00:43:43.463717  956294 cache_images.go:74] Images are preloaded, skipping loading
	I0813 00:43:43.463787  956294 ssh_runner.go:149] Run: crio config
	I0813 00:43:43.540861  956294 cni.go:93] Creating CNI manager for "calico"
	I0813 00:43:43.540886  956294 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 00:43:43.540902  956294 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20210813002927-676638 NodeName:calico-20210813002927-676638 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 00:43:43.541084  956294 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "calico-20210813002927-676638"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
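Editor's note: the kubeadm config dumped above is one multi-document YAML stream containing four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A minimal stdlib-only sketch of splitting such a stream and listing the kinds it contains — the `CONFIG` string is an abbreviated stand-in for the full config in the log, not the real file:

```python
# Split a kubeadm-style multi-document YAML stream on "---" separators
# and report each document's apiVersion and kind. Standard library only
# (no PyYAML); CONFIG is an abbreviated stand-in for the logged config.
CONFIG = """\
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

def doc_kinds(stream: str):
    """Return (apiVersion, kind) for each YAML document in the stream."""
    kinds = []
    for doc in stream.split("\n---\n"):
        fields = dict(
            line.split(": ", 1) for line in doc.strip().splitlines()
            if ": " in line
        )
        kinds.append((fields.get("apiVersion"), fields.get("kind")))
    return kinds

if __name__ == "__main__":
    for api, kind in doc_kinds(CONFIG):
        print(f"{kind} ({api})")
```

This top-level split is how kubeadm itself treats the file: each document is parsed and validated independently against its own apiVersion.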
	I0813 00:43:43.541199  956294 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=calico-20210813002927-676638 --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:calico-20210813002927-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0813 00:43:43.541296  956294 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 00:43:43.549434  956294 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 00:43:43.549509  956294 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 00:43:43.557250  956294 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (559 bytes)
	I0813 00:43:43.571362  956294 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 00:43:43.585384  956294 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0813 00:43:43.599112  956294 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0813 00:43:43.602395  956294 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
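Editor's note: the bash one-liner above pins `control-plane.minikube.internal` in `/etc/hosts` by filtering out any stale entry, appending the desired one, and copying the result back over the original. A sketch of the same write-then-replace idiom in Python, run against a scratch file so it is safe to execute anywhere (paths and IPs here are illustrative, not taken from a live host):

```python
# Reproduce the /etc/hosts pinning idiom from the log: drop any existing
# entry for the name, append "ip<TAB>name", then atomically replace the
# file (os.replace plays the role of the "cp /tmp/h.$$" in the log line).
import os
import tempfile

def pin_host(hosts_path: str, ip: str, name: str) -> None:
    """Ensure exactly one 'ip<TAB>name' entry exists in hosts_path."""
    with open(hosts_path) as f:
        kept = [l for l in f if not l.rstrip("\n").endswith("\t" + name)]
    kept.append(f"{ip}\t{name}\n")
    tmp = hosts_path + ".new"
    with open(tmp, "w") as f:
        f.writelines(kept)
    os.replace(tmp, hosts_path)

if __name__ == "__main__":
    # Scratch file standing in for /etc/hosts, with a stale entry.
    path = tempfile.NamedTemporaryFile(delete=False).name
    with open(path, "w") as f:
        f.write("127.0.0.1\tlocalhost\n"
                "192.168.49.9\tcontrol-plane.minikube.internal\n")
    pin_host(path, "192.168.49.2", "control-plane.minikube.internal")
    print(open(path).read())
```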
	I0813 00:43:43.611937  956294 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638 for IP: 192.168.49.2
	I0813 00:43:43.611987  956294 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0813 00:43:43.612004  956294 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0813 00:43:43.612062  956294 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/client.key
	I0813 00:43:43.612072  956294 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/client.crt with IP's: []
	I0813 00:43:43.672273  956294 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/client.crt ...
	I0813 00:43:43.672312  956294 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/client.crt: {Name:mk4a356d6f1642752493008db385f7437dcea961 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:43:43.672525  956294 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/client.key ...
	I0813 00:43:43.672539  956294 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/client.key: {Name:mke693a39fad67439eefdc398a9d2a95a69cc01e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:43:43.672635  956294 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/apiserver.key.dd3b5fb2
	I0813 00:43:43.672652  956294 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 00:43:43.816679  956294 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/apiserver.crt.dd3b5fb2 ...
	I0813 00:43:43.816718  956294 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/apiserver.crt.dd3b5fb2: {Name:mkd6f9dbd226a0192687633e36c03862f88ccbd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:43:43.816951  956294 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/apiserver.key.dd3b5fb2 ...
	I0813 00:43:43.816973  956294 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/apiserver.key.dd3b5fb2: {Name:mk2a17cb65bb73fb759fbf8d275c3d7cd6e36b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:43:43.817122  956294 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/apiserver.crt
	I0813 00:43:43.817207  956294 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/apiserver.key
	I0813 00:43:43.817384  956294 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/proxy-client.key
	I0813 00:43:43.817404  956294 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/proxy-client.crt with IP's: []
	I0813 00:43:44.035370  956294 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/proxy-client.crt ...
	I0813 00:43:44.035402  956294 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/proxy-client.crt: {Name:mk3327dc6a1e1dfd0a846e3fafd9ead4a9f5b813 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:43:44.035603  956294 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/proxy-client.key ...
	I0813 00:43:44.035617  956294 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/proxy-client.key: {Name:mk040fcf6dc505bd6c4f7c8992764fa2f87e88b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:43:44.035790  956294 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/676638.pem (1338 bytes)
	W0813 00:43:44.035830  956294 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/676638_empty.pem, impossibly tiny 0 bytes
	I0813 00:43:44.035841  956294 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 00:43:44.035864  956294 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1082 bytes)
	I0813 00:43:44.035891  956294 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0813 00:43:44.035915  956294 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1675 bytes)
	I0813 00:43:44.035959  956294 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem (1708 bytes)
	I0813 00:43:44.036970  956294 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 00:43:44.090595  956294 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 00:43:44.110172  956294 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 00:43:44.132210  956294 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813002927-676638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 00:43:44.152599  956294 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 00:43:44.174776  956294 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0813 00:43:44.193582  956294 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 00:43:44.215218  956294 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0813 00:43:44.234376  956294 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/676638.pem --> /usr/share/ca-certificates/676638.pem (1338 bytes)
	I0813 00:43:44.254223  956294 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/6766382.pem --> /usr/share/ca-certificates/6766382.pem (1708 bytes)
	I0813 00:43:44.274332  956294 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 00:43:44.293414  956294 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 00:43:44.308360  956294 ssh_runner.go:149] Run: openssl version
	I0813 00:43:44.314180  956294 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/676638.pem && ln -fs /usr/share/ca-certificates/676638.pem /etc/ssl/certs/676638.pem"
	I0813 00:43:44.323204  956294 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/676638.pem
	I0813 00:43:44.328153  956294 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 13 00:05 /usr/share/ca-certificates/676638.pem
	I0813 00:43:44.328223  956294 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/676638.pem
	I0813 00:43:44.335254  956294 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/676638.pem /etc/ssl/certs/51391683.0"
	I0813 00:43:44.350833  956294 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6766382.pem && ln -fs /usr/share/ca-certificates/6766382.pem /etc/ssl/certs/6766382.pem"
	I0813 00:43:44.360432  956294 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6766382.pem
	I0813 00:43:44.364086  956294 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 13 00:05 /usr/share/ca-certificates/6766382.pem
	I0813 00:43:44.364145  956294 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6766382.pem
	I0813 00:43:44.370393  956294 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6766382.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 00:43:44.379430  956294 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 00:43:44.387592  956294 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:43:44.390799  956294 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 12 23:55 /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:43:44.390854  956294 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 00:43:44.396781  956294 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 00:43:44.405120  956294 kubeadm.go:390] StartCluster: {Name:calico-20210813002927-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210813002927-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:43:44.405211  956294 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 00:43:44.405278  956294 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 00:43:44.432396  956294 cri.go:76] found id: ""
	I0813 00:43:44.432477  956294 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 00:43:44.440161  956294 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 00:43:44.448004  956294 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 00:43:44.448058  956294 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 00:43:44.458278  956294 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 00:43:44.458336  956294 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 00:44:12.332417  956294 out.go:204]   - Generating certificates and keys ...
	I0813 00:44:12.335601  956294 out.go:204]   - Booting up control plane ...
	I0813 00:44:12.338429  956294 out.go:204]   - Configuring RBAC rules ...
	I0813 00:44:12.340919  956294 cni.go:93] Creating CNI manager for "calico"
	I0813 00:44:12.343158  956294 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0813 00:44:12.343544  956294 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 00:44:12.343564  956294 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (202053 bytes)
	I0813 00:44:12.357630  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 00:44:13.517591  956294 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.159915836s)
	I0813 00:44:13.517653  956294 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 00:44:13.517731  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:13.517733  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19 minikube.k8s.io/name=calico-20210813002927-676638 minikube.k8s.io/updated_at=2021_08_13T00_44_13_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:13.904216  956294 ops.go:34] apiserver oom_adj: -16
	I0813 00:44:13.904365  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:14.596619  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:15.096404  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:15.596688  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:16.096658  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:16.596719  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:17.096530  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:17.596403  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:18.096184  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:18.596126  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:19.096821  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:19.596150  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:20.096634  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:20.596656  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:21.096914  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:21.596639  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:22.096468  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:22.596831  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:23.096839  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:23.596251  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:24.096355  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:24.596274  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:25.096241  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:25.596115  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:26.096194  956294 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 00:44:26.412469  956294 kubeadm.go:985] duration metric: took 12.89479655s to wait for elevateKubeSystemPrivileges.
	I0813 00:44:26.412508  956294 kubeadm.go:392] StartCluster complete in 42.007398544s
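Editor's note: the run of `kubectl get sa default` calls above, spaced roughly 500ms apart, is a poll-until-ready loop — minikube retries until the `default` service account exists before finishing `elevateKubeSystemPrivileges` (12.89s here). A generic sketch of that pattern; `wait_for`, its timings, and the fake `check` callback are illustrative, not minikube's actual code:

```python
# Generic poll-until-ready loop in the spirit of the ~500ms
# "kubectl get sa default" retries in the log above.
import time

def wait_for(check, timeout=60.0, interval=0.5,
             clock=time.monotonic, sleep=time.sleep):
    """Call `check` every `interval` seconds until it returns truthy
    or `timeout` elapses; True on success, False on timeout."""
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return True
        sleep(interval)
    return False

if __name__ == "__main__":
    attempts = []
    # Succeeds on the third attempt, like a service account that
    # appears a moment after the apiserver comes up.
    ok = wait_for(lambda: attempts.append(1) or len(attempts) >= 3,
                  timeout=5.0, interval=0.01)
    print(ok, len(attempts))
```

Using a monotonic clock for the deadline (rather than wall time) keeps the timeout correct even if the system clock steps during the wait.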
	I0813 00:44:26.412533  956294 settings.go:142] acquiring lock: {Name:mk8e048b414f35bb1583f1d1b3e929d90c1bd9ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:44:26.412643  956294 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:44:26.415094  956294 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig: {Name:mk7dda383efa2f679c68affe6e459fff93248137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 00:44:26.937411  956294 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20210813002927-676638" rescaled to 1
	I0813 00:44:26.937483  956294 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 00:44:26.937516  956294 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 00:44:26.939693  956294 out.go:177] * Verifying Kubernetes components...
	I0813 00:44:26.939768  956294 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:44:26.937626  956294 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 00:44:26.939851  956294 addons.go:59] Setting storage-provisioner=true in profile "calico-20210813002927-676638"
	I0813 00:44:26.939870  956294 addons.go:135] Setting addon storage-provisioner=true in "calico-20210813002927-676638"
	W0813 00:44:26.939877  956294 addons.go:147] addon storage-provisioner should already be in state true
	I0813 00:44:26.939913  956294 host.go:66] Checking if "calico-20210813002927-676638" exists ...
	I0813 00:44:26.939923  956294 addons.go:59] Setting default-storageclass=true in profile "calico-20210813002927-676638"
	I0813 00:44:26.939951  956294 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20210813002927-676638"
	I0813 00:44:26.940366  956294 cli_runner.go:115] Run: docker container inspect calico-20210813002927-676638 --format={{.State.Status}}
	I0813 00:44:26.940521  956294 cli_runner.go:115] Run: docker container inspect calico-20210813002927-676638 --format={{.State.Status}}
	I0813 00:44:27.006262  956294 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 00:44:27.006489  956294 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 00:44:27.006511  956294 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 00:44:27.006582  956294 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210813002927-676638
	I0813 00:44:27.028970  956294 addons.go:135] Setting addon default-storageclass=true in "calico-20210813002927-676638"
	W0813 00:44:27.029007  956294 addons.go:147] addon default-storageclass should already be in state true
	I0813 00:44:27.029043  956294 host.go:66] Checking if "calico-20210813002927-676638" exists ...
	I0813 00:44:27.029661  956294 cli_runner.go:115] Run: docker container inspect calico-20210813002927-676638 --format={{.State.Status}}
	I0813 00:44:27.064604  956294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002927-676638/id_rsa Username:docker}
	I0813 00:44:27.087288  956294 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 00:44:27.087318  956294 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 00:44:27.087390  956294 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210813002927-676638
	I0813 00:44:27.120458  956294 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 00:44:27.122913  956294 node_ready.go:35] waiting up to 5m0s for node "calico-20210813002927-676638" to be "Ready" ...
	I0813 00:44:27.126932  956294 node_ready.go:49] node "calico-20210813002927-676638" has status "Ready":"True"
	I0813 00:44:27.126953  956294 node_ready.go:38] duration metric: took 4.0091ms waiting for node "calico-20210813002927-676638" to be "Ready" ...
	I0813 00:44:27.126967  956294 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:44:27.141249  956294 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace to be "Ready" ...
	I0813 00:44:27.168023  956294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813002927-676638/id_rsa Username:docker}
	I0813 00:44:27.237746  956294 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 00:44:27.410039  956294 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 00:44:28.099781  956294 start.go:736] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0813 00:44:28.304452  956294 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.066660835s)
	I0813 00:44:28.339612  956294 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 00:44:28.339644  956294 addons.go:344] enableAddons completed in 1.402031517s
	I0813 00:44:29.160975  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 00:44:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0813 00:44:31.161352  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:33.161867  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:35.661597  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:37.661818  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:39.662772  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:42.162457  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:44.661488  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:46.663873  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:49.160989  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:51.161534  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:53.662271  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:56.160838  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:44:58.161605  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:00.162432  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:02.661880  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:05.161491  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:11.571120  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:13.660723  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:15.661455  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:18.161572  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:20.661820  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:23.161210  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:25.161441  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:27.661181  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:29.662281  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:32.160980  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:34.161909  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:36.661513  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:38.662047  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:40.668126  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:43.160617  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:45.161761  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:47.661738  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:49.662122  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:52.161761  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:54.661541  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:57.161508  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:45:59.661494  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:01.663484  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:04.161157  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:06.662059  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:08.662335  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:11.162391  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:13.163432  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:15.660890  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:17.661011  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:19.661185  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:22.162235  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:24.661699  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:27.161897  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:29.661512  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:31.662094  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:38.648459  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:40.661535  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:42.662151  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:45.161076  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:47.161337  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:49.162138  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:51.163759  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:53.660713  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:55.661162  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:46:58.160718  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:00.161163  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:02.661501  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:05.160545  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:07.162000  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:09.661163  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:12.160858  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:14.162039  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:16.660633  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:18.660997  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:21.161849  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:23.162004  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:25.662158  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:28.160828  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:30.161376  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:32.661845  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:35.161035  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:37.662000  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:40.160732  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:42.161046  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:44.162019  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:46.162552  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:48.661364  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:51.161504  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:53.163668  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:55.661898  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:47:58.162031  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:00.660636  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:02.661500  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:05.162582  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:07.165438  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:09.660979  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:12.161076  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:14.161326  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:16.660763  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:18.661787  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:21.161658  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:23.661544  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:26.161746  956294 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:27.167305  956294 pod_ready.go:81] duration metric: took 4m0.026012144s waiting for pod "calico-kube-controllers-85ff9ff759-qxf8j" in "kube-system" namespace to be "Ready" ...
	E0813 00:48:27.167332  956294 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0813 00:48:27.167344  956294 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-m5pzz" in "kube-system" namespace to be "Ready" ...
	I0813 00:48:29.179041  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:31.179854  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:33.679288  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:35.680774  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:38.179563  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:40.179758  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:42.679142  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:45.179548  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:47.679387  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:50.180061  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:52.679666  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:55.179670  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:57.179740  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:48:59.179885  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:01.679446  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:03.679744  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:06.179885  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:08.180585  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:10.679806  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:13.179509  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:15.679974  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:18.179087  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:20.179359  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:22.680031  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:25.179353  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:27.678725  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:29.680332  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:32.179433  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:34.679134  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:36.679822  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:39.179283  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:41.679212  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:43.680183  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:46.178778  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:48.180006  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:50.678625  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:52.679082  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:55.180146  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:57.679707  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:49:59.679949  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:02.179420  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:04.179931  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:06.680007  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:09.179537  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:11.179767  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:13.679927  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:16.179173  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:18.678931  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:20.679138  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:22.679577  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:24.680227  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:27.179191  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:29.179473  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:31.180167  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:33.680423  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:36.179609  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:38.179794  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:40.678861  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:42.679585  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:45.179130  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:47.179734  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:49.179902  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:51.679518  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:54.179071  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:56.179191  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:50:58.179807  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:00.679606  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:03.179714  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:05.679719  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:08.179197  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:10.179606  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:12.179895  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:14.180097  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:16.679451  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:19.179344  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:21.179611  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:23.681832  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:26.179302  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:28.679199  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:30.679756  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:33.180018  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:35.680179  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:38.179552  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:40.179696  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:42.679433  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:45.179247  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:47.679874  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:50.179679  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:52.679180  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:55.179651  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:57.180200  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:51:59.679638  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:52:02.179312  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:52:04.179704  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:52:06.679495  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:52:09.179454  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:52:11.679062  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:52:13.682398  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:52:16.179077  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:52:18.179296  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:52:20.179600  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:52:22.179794  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:52:24.679890  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:52:27.179202  956294 pod_ready.go:102] pod "calico-node-m5pzz" in "kube-system" namespace has status "Ready":"False"
	I0813 00:52:27.183504  956294 pod_ready.go:81] duration metric: took 4m0.016147739s waiting for pod "calico-node-m5pzz" in "kube-system" namespace to be "Ready" ...
	E0813 00:52:27.183527  956294 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0813 00:52:27.183549  956294 pod_ready.go:38] duration metric: took 8m0.056567951s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 00:52:27.186074  956294 out.go:177] 
	W0813 00:52:27.186253  956294 out.go:242] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0813 00:52:27.186268  956294 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 00:52:27.188142  956294 out.go:242] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                           │
	│                                                                                                                                                         │
	│    * Please attach the following file to the GitHub issue:                                                                                              │
	│    * - /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 00:52:27.189750  956294 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:100: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (532.37s)
E0813 00:53:09.411138  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/bridge-20210813002925-676638/client.crt: no such file or directory
E0813 00:53:10.030824  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
E0813 00:53:21.598082  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
E0813 00:53:21.911714  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt: no such file or directory
E0813 00:53:23.941061  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210813002926-676638/client.crt: no such file or directory
E0813 00:53:49.593780  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt: no such file or directory
E0813 00:54:12.101681  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813002927-676638/client.crt: no such file or directory
E0813 00:54:31.332013  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/bridge-20210813002925-676638/client.crt: no such file or directory
E0813 00:54:39.787419  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813002927-676638/client.crt: no such file or directory
E0813 00:55:23.026260  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210813002925-676638/client.crt: no such file or directory
E0813 00:55:23.031606  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210813002925-676638/client.crt: no such file or directory
E0813 00:55:23.041931  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210813002925-676638/client.crt: no such file or directory
E0813 00:55:23.062285  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210813002925-676638/client.crt: no such file or directory
E0813 00:55:23.102696  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210813002925-676638/client.crt: no such file or directory
E0813 00:55:23.183092  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210813002925-676638/client.crt: no such file or directory
E0813 00:55:23.343564  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210813002925-676638/client.crt: no such file or directory
E0813 00:55:23.664206  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210813002925-676638/client.crt: no such file or directory
E0813 00:55:24.305210  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210813002925-676638/client.crt: no such file or directory
E0813 00:55:25.585738  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210813002925-676638/client.crt: no such file or directory
E0813 00:55:28.145928  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210813002925-676638/client.crt: no such file or directory
E0813 00:55:33.266808  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210813002925-676638/client.crt: no such file or directory
E0813 00:55:40.096783  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210813002926-676638/client.crt: no such file or directory
E0813 00:55:43.507365  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210813002925-676638/client.crt: no such file or directory
E0813 00:56:03.988529  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210813002925-676638/client.crt: no such file or directory
E0813 00:56:07.782088  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210813002926-676638/client.crt: no such file or directory
E0813 00:56:44.949943  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210813002925-676638/client.crt: no such file or directory
E0813 00:56:47.486491  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/bridge-20210813002925-676638/client.crt: no such file or directory
E0813 00:56:56.455640  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002925-676638/client.crt: no such file or directory
E0813 00:57:01.852767  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813003017-676638/client.crt: no such file or directory
E0813 00:57:15.172825  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/bridge-20210813002925-676638/client.crt: no such file or directory
E0813 00:57:25.139867  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813003110-676638/client.crt: no such file or directory
E0813 00:57:53.081068  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
E0813 00:58:06.870386  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210813002925-676638/client.crt: no such file or directory
E0813 00:58:10.031931  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
E0813 00:58:21.598331  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
E0813 00:58:21.911117  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt: no such file or directory
E0813 00:58:48.184539  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813003110-676638/client.crt: no such file or directory
E0813 00:59:12.101795  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813002927-676638/client.crt: no such file or directory
E0813 01:00:23.026624  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/enable-default-cni-20210813002925-676638/client.crt: no such file or directory
E0813 01:00:40.097045  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/kindnet-20210813002926-676638/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (290.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (71.903584ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (71.574049ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (71.21597ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (78.934695ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (69.818093ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (73.973451ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (92.836959ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (88.169587ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **
E0813 00:46:24.648693  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (70.376934ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (70.267754ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (73.384676ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **
E0813 00:47:25.138683  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813003110-676638/client.crt: no such file or directory
E0813 00:47:37.417414  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002925-676638/client.crt: no such file or directory
E0813 00:47:42.814476  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813003017-676638/client.crt: no such file or directory
E0813 00:47:52.824098  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813003110-676638/client.crt: no such file or directory
E0813 00:48:10.031698  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (70.039825ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **
E0813 00:48:18.377767  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002925-676638/client.crt: no such file or directory
E0813 00:48:21.598513  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
E0813 00:48:21.911122  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt: no such file or directory
E0813 00:48:21.916418  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt: no such file or directory
E0813 00:48:21.926693  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt: no such file or directory
E0813 00:48:21.947029  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt: no such file or directory
E0813 00:48:21.987316  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt: no such file or directory
E0813 00:48:22.067684  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt: no such file or directory
E0813 00:48:22.228140  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt: no such file or directory
E0813 00:48:22.548710  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt: no such file or directory
E0813 00:48:23.189628  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt: no such file or directory
E0813 00:48:23.774941  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813003017-676638/client.crt: no such file or directory
E0813 00:48:24.469769  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt: no such file or directory
E0813 00:48:27.030047  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt: no such file or directory
E0813 00:48:32.150850  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt: no such file or directory
E0813 00:48:42.391128  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt: no such file or directory
E0813 00:49:02.872305  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt: no such file or directory
E0813 00:49:12.102295  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813002927-676638/client.crt: no such file or directory
E0813 00:49:12.107627  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813002927-676638/client.crt: no such file or directory
E0813 00:49:12.117918  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813002927-676638/client.crt: no such file or directory
E0813 00:49:12.138207  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813002927-676638/client.crt: no such file or directory
E0813 00:49:12.178546  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813002927-676638/client.crt: no such file or directory
E0813 00:49:12.258948  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813002927-676638/client.crt: no such file or directory
E0813 00:49:12.419407  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813002927-676638/client.crt: no such file or directory
E0813 00:49:12.740044  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813002927-676638/client.crt: no such file or directory
E0813 00:49:13.381011  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813002927-676638/client.crt: no such file or directory
E0813 00:49:14.661444  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813002927-676638/client.crt: no such file or directory
E0813 00:49:17.222366  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813002927-676638/client.crt: no such file or directory
E0813 00:49:22.343083  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813002927-676638/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (72.231991ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
E0813 00:49:32.584132  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813002927-676638/client.crt: no such file or directory
E0813 00:49:40.298928  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002925-676638/client.crt: no such file or directory
E0813 00:49:43.832477  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813002927-676638/client.crt: no such file or directory
E0813 00:49:45.695267  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813003017-676638/client.crt: no such file or directory
E0813 00:49:53.064629  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813002927-676638/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (73.505613ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
net_test.go:168: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:173: failed nslookup: got="", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (290.87s)


Test pass (216/250)

Order passed test Duration
3 TestDownloadOnly/v1.14.0/json-events 5.05
4 TestDownloadOnly/v1.14.0/preload-exists 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.07
10 TestDownloadOnly/v1.21.3/json-events 7.77
11 TestDownloadOnly/v1.21.3/preload-exists 0
15 TestDownloadOnly/v1.21.3/LogsDuration 0.07
17 TestDownloadOnly/v1.22.0-rc.0/json-events 13.93
18 TestDownloadOnly/v1.22.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.22.0-rc.0/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 3.04
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.4
25 TestDownloadOnlyKic 10.44
26 TestOffline 115.03
29 TestAddons/parallel/Registry 23.49
31 TestAddons/parallel/MetricsServer 6
32 TestAddons/parallel/HelmTiller 10.6
33 TestAddons/parallel/Olm 71.69
34 TestAddons/parallel/CSI 66.42
35 TestAddons/parallel/GCPAuth 48.74
36 TestCertOptions 36.93
38 TestForceSystemdFlag 37.14
39 TestForceSystemdEnv 44.07
40 TestKVMDriverInstallOrUpdate 1.87
44 TestErrorSpam/setup 28.59
45 TestErrorSpam/start 1
46 TestErrorSpam/status 1
47 TestErrorSpam/pause 2.49
48 TestErrorSpam/unpause 6.31
49 TestErrorSpam/stop 1.67
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 98.73
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 5.51
56 TestFunctional/serial/KubeContext 0.05
57 TestFunctional/serial/KubectlGetPods 0.24
60 TestFunctional/serial/CacheCmd/cache/add_remote 3.15
61 TestFunctional/serial/CacheCmd/cache/add_local 1.07
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
63 TestFunctional/serial/CacheCmd/cache/list 0.06
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
65 TestFunctional/serial/CacheCmd/cache/cache_reload 1.99
66 TestFunctional/serial/CacheCmd/cache/delete 0.12
67 TestFunctional/serial/MinikubeKubectlCmd 0.12
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
69 TestFunctional/serial/ExtraConfig 70.32
70 TestFunctional/serial/ComponentHealth 0.07
71 TestFunctional/serial/LogsCmd 1.19
72 TestFunctional/serial/LogsFileCmd 1.21
74 TestFunctional/parallel/ConfigCmd 0.48
75 TestFunctional/parallel/DashboardCmd 6.1
76 TestFunctional/parallel/DryRun 0.77
77 TestFunctional/parallel/InternationalLanguage 0.32
78 TestFunctional/parallel/StatusCmd 1.36
81 TestFunctional/parallel/ServiceCmd 15.91
82 TestFunctional/parallel/AddonsCmd 0.19
83 TestFunctional/parallel/PersistentVolumeClaim 30.79
85 TestFunctional/parallel/SSHCmd 0.92
86 TestFunctional/parallel/CpCmd 0.97
87 TestFunctional/parallel/MySQL 27
88 TestFunctional/parallel/FileSync 0.36
89 TestFunctional/parallel/CertSync 1.78
93 TestFunctional/parallel/NodeLabels 0.07
94 TestFunctional/parallel/LoadImage 1.57
95 TestFunctional/parallel/RemoveImage 3.93
96 TestFunctional/parallel/LoadImageFromFile 1.26
97 TestFunctional/parallel/BuildImage 3.49
98 TestFunctional/parallel/ListImages 0.75
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.78
101 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
102 TestFunctional/parallel/MountCmd/any-port 9.21
103 TestFunctional/parallel/ProfileCmd/profile_list 0.42
104 TestFunctional/parallel/ProfileCmd/profile_json_output 0.55
105 TestFunctional/parallel/MountCmd/specific-port 1.91
107 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
109 TestFunctional/parallel/Version/short 0.07
110 TestFunctional/parallel/Version/components 0.55
111 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
112 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
116 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
117 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
118 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
119 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
120 TestFunctional/delete_busybox_image 0.1
121 TestFunctional/delete_my-image_image 0.04
122 TestFunctional/delete_minikube_cached_images 0.04
126 TestJSONOutput/start/Audit 0
128 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
129 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
131 TestJSONOutput/pause/Audit 0
133 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
134 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
136 TestJSONOutput/unpause/Audit 0
138 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
141 TestJSONOutput/stop/Audit 0
143 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
144 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
145 TestErrorJSONOutput 0.55
147 TestKicCustomNetwork/create_custom_network 31.6
148 TestKicCustomNetwork/use_default_bridge_network 26.15
149 TestKicExistingNetwork 26.8
150 TestMainNoArgs 0.06
153 TestMultiNode/serial/FreshStart2Nodes 120.26
154 TestMultiNode/serial/DeployApp2Nodes 32.57
156 TestMultiNode/serial/AddNode 26.69
157 TestMultiNode/serial/ProfileList 0.31
158 TestMultiNode/serial/CopyFile 2.56
159 TestMultiNode/serial/StopNode 2.63
160 TestMultiNode/serial/StartAfterStop 31.91
161 TestMultiNode/serial/RestartKeepsNodes 164.07
162 TestMultiNode/serial/DeleteNode 5.74
163 TestMultiNode/serial/StopMultiNode 41.61
164 TestMultiNode/serial/RestartMultiNode 69.87
165 TestMultiNode/serial/ValidateNameConflict 36.12
171 TestDebPackageInstall/install_amd64_debian:sid/minikube 0
172 TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver 11.65
174 TestDebPackageInstall/install_amd64_debian:latest/minikube 0
175 TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver 10.62
177 TestDebPackageInstall/install_amd64_debian:10/minikube 0
178 TestDebPackageInstall/install_amd64_debian:10/kvm2-driver 10.62
180 TestDebPackageInstall/install_amd64_debian:9/minikube 0
181 TestDebPackageInstall/install_amd64_debian:9/kvm2-driver 8.46
183 TestDebPackageInstall/install_amd64_ubuntu:latest/minikube 0
184 TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver 15.36
186 TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube 0
187 TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver 14.35
189 TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube 0
190 TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver 15.61
192 TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube 0
193 TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver 13.54
199 TestInsufficientStorage 13.72
202 TestKubernetesUpgrade 137.33
203 TestMissingContainerUpgrade 164.21
212 TestPause/serial/Start 104.8
220 TestNetworkPlugins/group/false 0.84
225 TestStartStop/group/old-k8s-version/serial/FirstStart 103.63
227 TestStartStop/group/no-preload/serial/FirstStart 137.06
228 TestPause/serial/SecondStartNoReconfiguration 11.39
229 TestPause/serial/Pause 0.72
230 TestPause/serial/VerifyStatus 0.35
231 TestPause/serial/Unpause 0.71
232 TestPause/serial/PauseAgain 5.75
233 TestPause/serial/DeletePaused 4.44
235 TestStartStop/group/embed-certs/serial/FirstStart 287.95
236 TestPause/serial/VerifyDeletedResources 0.72
238 TestStartStop/group/default-k8s-different-port/serial/FirstStart 74.11
239 TestStartStop/group/old-k8s-version/serial/DeployApp 9.51
240 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.69
241 TestStartStop/group/old-k8s-version/serial/Stop 20.95
242 TestStartStop/group/default-k8s-different-port/serial/DeployApp 7.64
243 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.98
244 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.29
245 TestStartStop/group/old-k8s-version/serial/SecondStart 661.12
246 TestStartStop/group/default-k8s-different-port/serial/Stop 20.78
247 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.22
248 TestStartStop/group/default-k8s-different-port/serial/SecondStart 348.57
249 TestStartStop/group/no-preload/serial/DeployApp 9.49
250 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.74
252 TestStartStop/group/embed-certs/serial/DeployApp 7.55
253 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.76
254 TestStartStop/group/embed-certs/serial/Stop 21.09
255 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
256 TestStartStop/group/embed-certs/serial/SecondStart 351.19
257 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.02
258 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.09
259 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.31
260 TestStartStop/group/default-k8s-different-port/serial/Pause 2.96
262 TestStartStop/group/newest-cni/serial/FirstStart 49.77
263 TestStartStop/group/newest-cni/serial/DeployApp 0
264 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.54
265 TestStartStop/group/newest-cni/serial/Stop 20.72
266 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
267 TestStartStop/group/newest-cni/serial/SecondStart 26.01
268 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
269 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
270 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
271 TestStartStop/group/newest-cni/serial/Pause 2.64
272 TestNetworkPlugins/group/auto/Start 69.29
273 TestNetworkPlugins/group/auto/KubeletFlags 0.32
274 TestNetworkPlugins/group/auto/NetCatPod 9.46
275 TestNetworkPlugins/group/auto/DNS 0.16
276 TestNetworkPlugins/group/auto/Localhost 0.16
277 TestNetworkPlugins/group/auto/HairPin 0.16
278 TestNetworkPlugins/group/custom-weave/Start 71.62
279 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
280 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
281 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
283 TestNetworkPlugins/group/cilium/Start 94.25
284 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.3
285 TestNetworkPlugins/group/custom-weave/NetCatPod 9.26
286 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.26
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.34
290 TestStartStop/group/old-k8s-version/serial/Pause 3.07
291 TestNetworkPlugins/group/enable-default-cni/Start 90.41
292 TestNetworkPlugins/group/cilium/ControllerPod 5.02
293 TestNetworkPlugins/group/cilium/KubeletFlags 0.3
294 TestNetworkPlugins/group/cilium/NetCatPod 9.28
295 TestNetworkPlugins/group/cilium/DNS 0.19
296 TestNetworkPlugins/group/cilium/Localhost 0.21
297 TestNetworkPlugins/group/cilium/HairPin 0.23
298 TestNetworkPlugins/group/kindnet/Start 68.4
299 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
300 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.27
302 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
304 TestNetworkPlugins/group/kindnet/NetCatPod 10.26
305 TestNetworkPlugins/group/kindnet/DNS 0.17
306 TestNetworkPlugins/group/kindnet/Localhost 0.27
307 TestNetworkPlugins/group/kindnet/HairPin 0.27
308 TestNetworkPlugins/group/bridge/Start 47.08
309 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
310 TestNetworkPlugins/group/bridge/NetCatPod 10.26
311 TestNetworkPlugins/group/bridge/DNS 0.17
312 TestNetworkPlugins/group/bridge/Localhost 0.15
313 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.14.0/json-events (5.05s)

=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210812235441-676638 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210812235441-676638 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.04934373s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (5.05s)

TestDownloadOnly/v1.14.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)

TestDownloadOnly/v1.14.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210812235441-676638
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210812235441-676638: exit status 85 (70.586784ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/12 23:54:41
	Running on machine: debian-jenkins-agent-12
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 23:54:41.493585  676650 out.go:298] Setting OutFile to fd 1 ...
	I0812 23:54:41.493677  676650 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 23:54:41.493681  676650 out.go:311] Setting ErrFile to fd 2...
	I0812 23:54:41.493684  676650 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 23:54:41.493810  676650 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	W0812 23:54:41.493940  676650 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: no such file or directory
	I0812 23:54:41.494175  676650 out.go:305] Setting JSON to true
	I0812 23:54:41.531870  676650 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":13043,"bootTime":1628799438,"procs":208,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0812 23:54:41.531981  676650 start.go:121] virtualization: kvm guest
	I0812 23:54:41.535315  676650 notify.go:169] Checking for updates...
	I0812 23:54:41.537582  676650 driver.go:335] Setting default libvirt URI to qemu:///system
	I0812 23:54:41.588028  676650 docker.go:132] docker version: linux-19.03.15
	I0812 23:54:41.588147  676650 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 23:54:41.674900  676650 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:59 SystemTime:2021-08-12 23:54:41.62455182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddre
ss:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnin
gs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0812 23:54:41.674999  676650 docker.go:244] overlay module found
	I0812 23:54:41.677527  676650 start.go:278] selected driver: docker
	I0812 23:54:41.677550  676650 start.go:751] validating driver "docker" against <nil>
	I0812 23:54:41.678023  676650 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 23:54:41.762212  676650 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:59 SystemTime:2021-08-12 23:54:41.714853728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0812 23:54:41.762358  676650 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0812 23:54:41.762999  676650 start_flags.go:344] Using suggested 8000MB memory alloc based on sys=32179MB, container=32179MB
	I0812 23:54:41.763114  676650 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0812 23:54:41.763140  676650 cni.go:93] Creating CNI manager for ""
	I0812 23:54:41.763148  676650 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0812 23:54:41.763160  676650 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0812 23:54:41.763194  676650 start_flags.go:277] config:
	{Name:download-only-20210812235441-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210812235441-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 23:54:41.765623  676650 cache.go:117] Beginning downloading kic base image for docker with crio
	I0812 23:54:41.767311  676650 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0812 23:54:41.767352  676650 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0812 23:54:41.802484  676650 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4
	I0812 23:54:41.802532  676650 cache.go:56] Caching tarball of preloaded images
	I0812 23:54:41.802850  676650 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0812 23:54:41.805435  676650 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4 ...
	I0812 23:54:41.845126  676650 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:70b8731eaaa1b4de2d1cd60021fc1260 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4
	I0812 23:54:41.858643  676650 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0812 23:54:41.858690  676650 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0812 23:54:44.947672  676650 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4 ...
	I0812 23:54:44.947775  676650 preload.go:254] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-cri-o-overlay-amd64.tar.lz4 ...
	I0812 23:54:46.119492  676650 cache.go:59] Finished verifying existence of preloaded tar for  v1.14.0 on crio
	I0812 23:54:46.119825  676650 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/download-only-20210812235441-676638/config.json ...
	I0812 23:54:46.119860  676650 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/download-only-20210812235441-676638/config.json: {Name:mk4582463fff7bd48f64dcde736b06779a002084 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0812 23:54:46.120124  676650 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime crio
	I0812 23:54:46.120461  676650 download.go:92] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/linux/v1.14.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210812235441-676638"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
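The download lines above (download.go:92) append the expected md5 digest to the preload URL as a `?checksum=md5:<hex>` query parameter, and the later preload.go:254 step verifies the saved tarball against it. A minimal sketch of that verification step, using a hypothetical stand-in file in place of the real tarball:

```python
# Sketch of checksum verification as reported at preload.go:237/254:
# hash the downloaded file in chunks and compare against the md5 digest
# that was embedded in the download URL. The file path is a stand-in.
import hashlib

def verify_md5(path: str, expected_hex: str) -> bool:
    """Hash the file incrementally and compare against the expected md5 digest."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex

# Demonstrate with a small stand-in for the preload tarball:
with open("/tmp/preload-demo.bin", "wb") as f:
    f.write(b"preloaded-images")
print(verify_md5("/tmp/preload-demo.bin",
                 hashlib.md5(b"preloaded-images").hexdigest()))  # → True
```

Chunked hashing keeps memory flat even for multi-hundred-megabyte preload tarballs like the ones downloaded here.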
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.07s)

TestDownloadOnly/v1.21.3/json-events (7.77s)

=== RUN   TestDownloadOnly/v1.21.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210812235441-676638 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210812235441-676638 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.7744459s)
--- PASS: TestDownloadOnly/v1.21.3/json-events (7.77s)

TestDownloadOnly/v1.21.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.21.3/preload-exists
--- PASS: TestDownloadOnly/v1.21.3/preload-exists (0.00s)

TestDownloadOnly/v1.21.3/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.21.3/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210812235441-676638
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210812235441-676638: exit status 85 (69.888614ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/12 23:54:46
	Running on machine: debian-jenkins-agent-12
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 23:54:46.617815  676792 out.go:298] Setting OutFile to fd 1 ...
	I0812 23:54:46.618039  676792 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 23:54:46.618050  676792 out.go:311] Setting ErrFile to fd 2...
	I0812 23:54:46.618056  676792 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 23:54:46.618183  676792 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	W0812 23:54:46.618316  676792 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: no such file or directory
	I0812 23:54:46.618466  676792 out.go:305] Setting JSON to true
	I0812 23:54:46.655164  676792 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":13048,"bootTime":1628799438,"procs":206,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0812 23:54:46.655290  676792 start.go:121] virtualization: kvm guest
	I0812 23:54:46.658360  676792 notify.go:169] Checking for updates...
	W0812 23:54:46.660867  676792 start.go:659] api.Load failed for download-only-20210812235441-676638: filestore "download-only-20210812235441-676638": Docker machine "download-only-20210812235441-676638" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0812 23:54:46.660918  676792 driver.go:335] Setting default libvirt URI to qemu:///system
	W0812 23:54:46.660965  676792 start.go:659] api.Load failed for download-only-20210812235441-676638: filestore "download-only-20210812235441-676638": Docker machine "download-only-20210812235441-676638" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0812 23:54:46.707595  676792 docker.go:132] docker version: linux-19.03.15
	I0812 23:54:46.707720  676792 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 23:54:46.791424  676792 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:59 SystemTime:2021-08-12 23:54:46.743729167 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0812 23:54:46.791513  676792 docker.go:244] overlay module found
	I0812 23:54:46.793935  676792 start.go:278] selected driver: docker
	I0812 23:54:46.793961  676792 start.go:751] validating driver "docker" against &{Name:download-only-20210812235441-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210812235441-676638 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 23:54:46.794538  676792 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 23:54:46.877137  676792 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:59 SystemTime:2021-08-12 23:54:46.830416577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0812 23:54:46.877698  676792 cni.go:93] Creating CNI manager for ""
	I0812 23:54:46.877716  676792 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0812 23:54:46.877724  676792 start_flags.go:277] config:
	{Name:download-only-20210812235441-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210812235441-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 23:54:46.880026  676792 cache.go:117] Beginning downloading kic base image for docker with crio
	I0812 23:54:46.881793  676792 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0812 23:54:46.881830  676792 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0812 23:54:46.917915  676792 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0812 23:54:46.917953  676792 cache.go:56] Caching tarball of preloaded images
	I0812 23:54:46.918280  676792 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime crio
	I0812 23:54:46.920613  676792 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4 ...
	I0812 23:54:46.959031  676792 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:5b844d0f443dc130a4f324a367701516 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-cri-o-overlay-amd64.tar.lz4
	I0812 23:54:46.970115  676792 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0812 23:54:46.970142  676792 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210812235441-676638"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.21.3/LogsDuration (0.07s)

TestDownloadOnly/v1.22.0-rc.0/json-events (13.93s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210812235441-676638 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210812235441-676638 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.928078563s)
--- PASS: TestDownloadOnly/v1.22.0-rc.0/json-events (13.93s)

TestDownloadOnly/v1.22.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210812235441-676638
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210812235441-676638: exit status 85 (68.839061ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/12 23:54:54
	Running on machine: debian-jenkins-agent-12
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0812 23:54:54.461181  676930 out.go:298] Setting OutFile to fd 1 ...
	I0812 23:54:54.461324  676930 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 23:54:54.461334  676930 out.go:311] Setting ErrFile to fd 2...
	I0812 23:54:54.461339  676930 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0812 23:54:54.461465  676930 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	W0812 23:54:54.461601  676930 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: no such file or directory
	I0812 23:54:54.461812  676930 out.go:305] Setting JSON to true
	I0812 23:54:54.499821  676930 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":13056,"bootTime":1628799438,"procs":206,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0812 23:54:54.499970  676930 start.go:121] virtualization: kvm guest
	I0812 23:54:54.503960  676930 notify.go:169] Checking for updates...
	W0812 23:54:54.506516  676930 start.go:659] api.Load failed for download-only-20210812235441-676638: filestore "download-only-20210812235441-676638": Docker machine "download-only-20210812235441-676638" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0812 23:54:54.506574  676930 driver.go:335] Setting default libvirt URI to qemu:///system
	W0812 23:54:54.506620  676930 start.go:659] api.Load failed for download-only-20210812235441-676638: filestore "download-only-20210812235441-676638": Docker machine "download-only-20210812235441-676638" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0812 23:54:54.554898  676930 docker.go:132] docker version: linux-19.03.15
	I0812 23:54:54.555022  676930 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 23:54:54.640146  676930 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:59 SystemTime:2021-08-12 23:54:54.591596269 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0812 23:54:54.640282  676930 docker.go:244] overlay module found
	I0812 23:54:54.642656  676930 start.go:278] selected driver: docker
	I0812 23:54:54.642680  676930 start.go:751] validating driver "docker" against &{Name:download-only-20210812235441-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210812235441-676638 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 23:54:54.643353  676930 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0812 23:54:54.727438  676930 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:59 SystemTime:2021-08-12 23:54:54.67950728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddre
ss:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnin
gs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0812 23:54:54.727981  676930 cni.go:93] Creating CNI manager for ""
	I0812 23:54:54.727997  676930 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
	I0812 23:54:54.728007  676930 start_flags.go:277] config:
	{Name:download-only-20210812235441-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:download-only-20210812235441-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0812 23:54:54.730436  676930 cache.go:117] Beginning downloading kic base image for docker with crio
	I0812 23:54:54.732140  676930 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0812 23:54:54.732189  676930 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0812 23:54:54.771959  676930 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0812 23:54:54.771989  676930 cache.go:56] Caching tarball of preloaded images
	I0812 23:54:54.772290  676930 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime crio
	I0812 23:54:54.774555  676930 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0812 23:54:54.814928  676930 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:c7902b63f7bbc786f5f337da25a17477 -> /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0812 23:54:54.822474  676930 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0812 23:54:54.822512  676930 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210812235441-676638"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
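`minikube logs` exits with status 85 here because the download-only profile never created a control-plane node, and the test harness records the non-zero exit and its output rather than aborting. That capture pattern can be sketched as follows, with a stand-in command used in place of minikube itself:

```python
# Sketch of non-zero-exit capture as done by the test helpers: run a
# command, record its exit status and stdout, and let the caller decide
# whether a particular status (e.g. 85) is expected. Stand-in command only.
import subprocess
import sys

def run_cmd(argv):
    """Run a command without raising; return (exit_status, stdout)."""
    proc = subprocess.run(argv, capture_output=True, text=True)
    return proc.returncode, proc.stdout

status, out = run_cmd(
    [sys.executable, "-c", "print('no control plane'); raise SystemExit(85)"]
)
print(status)  # → 85
```

Because `subprocess.run` is called without `check=True`, a failing command surfaces as data (the returncode) instead of an exception, which is what lets the test assert on `exit status 85` specifically.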
--- PASS: TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (3.04s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-linux-amd64 delete --all
aaa_download_only_test.go:189: (dbg) Done: out/minikube-linux-amd64 delete --all: (3.044109254s)
--- PASS: TestDownloadOnly/DeleteAll (3.04s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.4s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20210812235441-676638
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.40s)

TestDownloadOnlyKic (10.44s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20210812235512-676638 --force --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:226: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20210812235512-676638 --force --alsologtostderr --driver=docker  --container-runtime=crio: (8.805673087s)
helpers_test.go:176: Cleaning up "download-docker-20210812235512-676638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20210812235512-676638
--- PASS: TestDownloadOnlyKic (10.44s)

TestOffline (115.03s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-20210813002640-676638 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-20210813002640-676638 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m51.650824331s)
helpers_test.go:176: Cleaning up "offline-crio-20210813002640-676638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-20210813002640-676638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-20210813002640-676638: (3.382934022s)
--- PASS: TestOffline (115.03s)

TestAddons/parallel/Registry (23.49s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:284: registry stabilized in 14.14512ms
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-8jjjh" [a711703f-7521-471b-8ea3-0734e481ba15] Running
addons_test.go:286: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008640315s
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-proxy-pd29p" [3569e920-9b07-4802-8b75-8e33d9e8fa45] Running
addons_test.go:289: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008330907s
addons_test.go:294: (dbg) Run:  kubectl --context addons-20210812235522-676638 delete po -l run=registry-test --now
addons_test.go:299: (dbg) Run:  kubectl --context addons-20210812235522-676638 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:299: (dbg) Done: kubectl --context addons-20210812235522-676638 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (12.66486286s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235522-676638 ip
2021/08/12 23:58:32 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235522-676638 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (23.49s)

TestAddons/parallel/MetricsServer (6s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: metrics-server stabilized in 2.7407ms
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:343: "metrics-server-77c99ccb96-hh2jp" [443aa982-36d7-45da-ac72-9d6fa4915ee6] Running
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.070143915s
addons_test.go:369: (dbg) Run:  kubectl --context addons-20210812235522-676638 top pods -n kube-system
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235522-676638 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.00s)

TestAddons/parallel/HelmTiller (10.6s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:410: tiller-deploy stabilized in 66.051892ms
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:343: "tiller-deploy-768d69497-zrqlj" [bfb7f0b2-ab6d-45af-912b-f2e37c637853] Running
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.010122854s
addons_test.go:427: (dbg) Run:  kubectl --context addons-20210812235522-676638 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
addons_test.go:427: (dbg) Done: kubectl --context addons-20210812235522-676638 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (4.527999949s)
addons_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235522-676638 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.60s)

TestAddons/parallel/Olm (71.69s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:463: catalog-operator stabilized in 14.439519ms
addons_test.go:467: olm-operator stabilized in 17.763509ms
addons_test.go:471: packageserver stabilized in 20.509399ms
addons_test.go:473: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=catalog-operator" in namespace "olm" ...
helpers_test.go:343: "catalog-operator-75d496484d-zt4zw" [f98affc3-d8e1-4390-83ff-ed36b797cd4a] Running
addons_test.go:473: (dbg) TestAddons/parallel/Olm: app=catalog-operator healthy within 5.006560226s
addons_test.go:476: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=olm-operator" in namespace "olm" ...
helpers_test.go:343: "olm-operator-859c88c96-dz428" [84e4c180-55dc-4303-ab54-5136efb9b4d0] Running
addons_test.go:476: (dbg) TestAddons/parallel/Olm: app=olm-operator healthy within 5.006987951s
addons_test.go:479: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=packageserver" in namespace "olm" ...
helpers_test.go:343: "packageserver-745f6f5d79-hrgv8" [5ce5a374-f323-4aa7-ba85-a6273ea6f354] Running
helpers_test.go:343: "packageserver-745f6f5d79-ss4c2" [1f53f35f-0bde-4275-b62c-b81b2c28582b] Running
helpers_test.go:343: "packageserver-745f6f5d79-hrgv8" [5ce5a374-f323-4aa7-ba85-a6273ea6f354] Running
helpers_test.go:343: "packageserver-745f6f5d79-ss4c2" [1f53f35f-0bde-4275-b62c-b81b2c28582b] Running
helpers_test.go:343: "packageserver-745f6f5d79-hrgv8" [5ce5a374-f323-4aa7-ba85-a6273ea6f354] Running
helpers_test.go:343: "packageserver-745f6f5d79-ss4c2" [1f53f35f-0bde-4275-b62c-b81b2c28582b] Running
helpers_test.go:343: "packageserver-745f6f5d79-hrgv8" [5ce5a374-f323-4aa7-ba85-a6273ea6f354] Running
helpers_test.go:343: "packageserver-745f6f5d79-ss4c2" [1f53f35f-0bde-4275-b62c-b81b2c28582b] Running
helpers_test.go:343: "packageserver-745f6f5d79-hrgv8" [5ce5a374-f323-4aa7-ba85-a6273ea6f354] Running
helpers_test.go:343: "packageserver-745f6f5d79-ss4c2" [1f53f35f-0bde-4275-b62c-b81b2c28582b] Running
helpers_test.go:343: "packageserver-745f6f5d79-hrgv8" [5ce5a374-f323-4aa7-ba85-a6273ea6f354] Running
addons_test.go:479: (dbg) TestAddons/parallel/Olm: app=packageserver healthy within 5.008493923s
addons_test.go:482: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "olm.catalogSource=operatorhubio-catalog" in namespace "olm" ...
helpers_test.go:343: "operatorhubio-catalog-5s8mk" [164aeee8-864a-4777-955c-936e806fcb14] Running
addons_test.go:482: (dbg) TestAddons/parallel/Olm: olm.catalogSource=operatorhubio-catalog healthy within 5.014362751s
addons_test.go:487: (dbg) Run:  kubectl --context addons-20210812235522-676638 create -f testdata/etcd.yaml
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210812235522-676638 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210812235522-676638 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210812235522-676638 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210812235522-676638 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210812235522-676638 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210812235522-676638 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210812235522-676638 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210812235522-676638 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210812235522-676638 get csv -n my-etcd
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210812235522-676638 get csv -n my-etcd
--- PASS: TestAddons/parallel/Olm (71.69s)

TestAddons/parallel/CSI (66.42s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 14.744637ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-20210812235522-676638 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210812235522-676638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210812235522-676638 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-20210812235522-676638 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [9aba5a7e-5fdc-4df4-bb31-35cdbf2f57df] Pending
helpers_test.go:343: "task-pv-pod" [9aba5a7e-5fdc-4df4-bb31-35cdbf2f57df] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:343: "task-pv-pod" [9aba5a7e-5fdc-4df4-bb31-35cdbf2f57df] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 26.0077089s
addons_test.go:549: (dbg) Run:  kubectl --context addons-20210812235522-676638 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210812235522-676638 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210812235522-676638 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-20210812235522-676638 delete pod task-pv-pod
addons_test.go:559: (dbg) Done: kubectl --context addons-20210812235522-676638 delete pod task-pv-pod: (2.904923808s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-20210812235522-676638 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-20210812235522-676638 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210812235522-676638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-20210812235522-676638 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [6a1f0e74-ab7d-4ba5-8e4a-0450f24c7c99] Pending
helpers_test.go:343: "task-pv-pod-restore" [6a1f0e74-ab7d-4ba5-8e4a-0450f24c7c99] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:343: "task-pv-pod-restore" [6a1f0e74-ab7d-4ba5-8e4a-0450f24c7c99] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 24.007742086s
addons_test.go:591: (dbg) Run:  kubectl --context addons-20210812235522-676638 delete pod task-pv-pod-restore
addons_test.go:591: (dbg) Done: kubectl --context addons-20210812235522-676638 delete pod task-pv-pod-restore: (1.806454347s)
addons_test.go:595: (dbg) Run:  kubectl --context addons-20210812235522-676638 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-20210812235522-676638 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235522-676638 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-20210812235522-676638 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.196276443s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235522-676638 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (66.42s)

TestAddons/parallel/GCPAuth (48.74s)

=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:618: (dbg) Run:  kubectl --context addons-20210812235522-676638 create -f testdata/busybox.yaml
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [cbfb85a3-5882-4b80-960a-c8fa075aa933] Pending
helpers_test.go:343: "busybox" [cbfb85a3-5882-4b80-960a-c8fa075aa933] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [cbfb85a3-5882-4b80-960a-c8fa075aa933] Running
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 8.006413443s
addons_test.go:630: (dbg) Run:  kubectl --context addons-20210812235522-676638 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:667: (dbg) Run:  kubectl --context addons-20210812235522-676638 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:683: (dbg) Run:  kubectl --context addons-20210812235522-676638 apply -f testdata/private-image.yaml
addons_test.go:690: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:343: "private-image-7ff9c8c74f-fhjrq" [d0538427-4e1e-44a6-9910-dc0294c82cd0] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:343: "private-image-7ff9c8c74f-fhjrq" [d0538427-4e1e-44a6-9910-dc0294c82cd0] Running
addons_test.go:690: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image healthy within 13.008970448s
addons_test.go:696: (dbg) Run:  kubectl --context addons-20210812235522-676638 apply -f testdata/private-image-eu.yaml
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:343: "private-image-eu-5956d58f9f-7s9x9" [f042ba97-2f9c-4314-afdc-492d85fac554] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:343: "private-image-eu-5956d58f9f-7s9x9" [f042ba97-2f9c-4314-afdc-492d85fac554] Running
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image-eu healthy within 13.007060458s
addons_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210812235522-676638 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:709: (dbg) Done: out/minikube-linux-amd64 -p addons-20210812235522-676638 addons disable gcp-auth --alsologtostderr -v=1: (13.712466899s)
--- PASS: TestAddons/parallel/GCPAuth (48.74s)

TestCertOptions (36.93s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20210813003004-676638 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20210813003004-676638 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (32.961089573s)
cert_options_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20210813003004-676638 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:73: (dbg) Run:  kubectl --context cert-options-20210813003004-676638 config view
helpers_test.go:176: Cleaning up "cert-options-20210813003004-676638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20210813003004-676638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20210813003004-676638: (3.61076943s)
--- PASS: TestCertOptions (36.93s)

TestForceSystemdFlag (37.14s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20210813002927-676638 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20210813002927-676638 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.312515339s)
helpers_test.go:176: Cleaning up "force-systemd-flag-20210813002927-676638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20210813002927-676638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20210813002927-676638: (3.824741493s)
--- PASS: TestForceSystemdFlag (37.14s)

TestForceSystemdEnv (44.07s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20210813002933-676638 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0813 00:29:44.647915  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
docker_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20210813002933-676638 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.435319648s)
helpers_test.go:176: Cleaning up "force-systemd-env-20210813002933-676638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20210813002933-676638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20210813002933-676638: (5.631435013s)
--- PASS: TestForceSystemdEnv (44.07s)

TestKVMDriverInstallOrUpdate (1.87s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.87s)

                                                
                                    
x
+
TestErrorSpam/setup (28.59s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210813000430-676638 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210813000430-676638 --driver=docker  --container-runtime=crio
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20210813000430-676638 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210813000430-676638 --driver=docker  --container-runtime=crio: (28.590424179s)
error_spam_test.go:88: acceptable stderr: "! Your cgroup does not allow setting memory."
--- PASS: TestErrorSpam/setup (28.59s)

TestErrorSpam/start (1s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813000430-676638 --log_dir /tmp/nospam-20210813000430-676638 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813000430-676638 --log_dir /tmp/nospam-20210813000430-676638 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813000430-676638 --log_dir /tmp/nospam-20210813000430-676638 start --dry-run
--- PASS: TestErrorSpam/start (1.00s)

TestErrorSpam/status (1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813000430-676638 --log_dir /tmp/nospam-20210813000430-676638 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813000430-676638 --log_dir /tmp/nospam-20210813000430-676638 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813000430-676638 --log_dir /tmp/nospam-20210813000430-676638 status
--- PASS: TestErrorSpam/status (1.00s)

TestErrorSpam/pause (2.49s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813000430-676638 --log_dir /tmp/nospam-20210813000430-676638 pause
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210813000430-676638 --log_dir /tmp/nospam-20210813000430-676638 pause: (1.582211468s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813000430-676638 --log_dir /tmp/nospam-20210813000430-676638 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813000430-676638 --log_dir /tmp/nospam-20210813000430-676638 pause
--- PASS: TestErrorSpam/pause (2.49s)

TestErrorSpam/unpause (6.31s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813000430-676638 --log_dir /tmp/nospam-20210813000430-676638 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813000430-676638 --log_dir /tmp/nospam-20210813000430-676638 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813000430-676638 --log_dir /tmp/nospam-20210813000430-676638 unpause
error_spam_test.go:179: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210813000430-676638 --log_dir /tmp/nospam-20210813000430-676638 unpause: (5.336424727s)
--- PASS: TestErrorSpam/unpause (6.31s)

TestErrorSpam/stop (1.67s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813000430-676638 --log_dir /tmp/nospam-20210813000430-676638 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210813000430-676638 --log_dir /tmp/nospam-20210813000430-676638 stop: (1.39005847s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813000430-676638 --log_dir /tmp/nospam-20210813000430-676638 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210813000430-676638 --log_dir /tmp/nospam-20210813000430-676638 stop
--- PASS: TestErrorSpam/stop (1.67s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1606: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/test/nested/copy/676638/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (98.73s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:1982: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813000517-676638 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:1982: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210813000517-676638 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m38.731982569s)
--- PASS: TestFunctional/serial/StartWithProxy (98.73s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.51s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:627: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813000517-676638 --alsologtostderr -v=8
functional_test.go:627: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210813000517-676638 --alsologtostderr -v=8: (5.509252945s)
functional_test.go:631: soft start took 5.510011179s for "functional-20210813000517-676638" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.51s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:647: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.24s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:660: (dbg) Run:  kubectl --context functional-20210813000517-676638 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.24s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 cache add k8s.gcr.io/pause:3.1
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 cache add k8s.gcr.io/pause:3.3
functional_test.go:982: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813000517-676638 cache add k8s.gcr.io/pause:3.3: (1.150411045s)
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 cache add k8s.gcr.io/pause:latest
functional_test.go:982: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813000517-676638 cache add k8s.gcr.io/pause:latest: (1.200533878s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)

TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1012: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210813000517-676638 /tmp/functional-20210813000517-676638615079270
functional_test.go:1024: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 cache add minikube-local-cache-test:functional-20210813000517-676638
functional_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 cache delete minikube-local-cache-test:functional-20210813000517-676638
functional_test.go:1018: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20210813000517-676638
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1036: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1078: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (303.284276ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 cache reload
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813000517-676638 cache reload: (1.065173982s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.99s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:678: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 kubectl -- --context functional-20210813000517-676638 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:701: (dbg) Run:  out/kubectl --context functional-20210813000517-676638 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (70.32s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:715: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813000517-676638 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0813 00:08:10.031415  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
E0813 00:08:10.037911  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
E0813 00:08:10.048081  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
E0813 00:08:10.068376  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
E0813 00:08:10.108710  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
E0813 00:08:10.189003  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
E0813 00:08:10.349447  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
E0813 00:08:10.670045  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
E0813 00:08:11.310448  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
E0813 00:08:12.591170  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
E0813 00:08:15.153008  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
functional_test.go:715: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210813000517-676638 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m10.32048999s)
functional_test.go:719: restart took 1m10.320611621s for "functional-20210813000517-676638" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (70.32s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:766: (dbg) Run:  kubectl --context functional-20210813000517-676638 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:780: etcd phase: Running
functional_test.go:790: etcd status: Ready
functional_test.go:780: kube-apiserver phase: Running
functional_test.go:790: kube-apiserver status: Ready
functional_test.go:780: kube-controller-manager phase: Running
functional_test.go:790: kube-controller-manager status: Ready
functional_test.go:780: kube-scheduler phase: Running
functional_test.go:790: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.19s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 logs
functional_test.go:1165: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813000517-676638 logs: (1.194618074s)
--- PASS: TestFunctional/serial/LogsCmd (1.19s)

TestFunctional/serial/LogsFileCmd (1.21s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1181: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 logs --file /tmp/functional-20210813000517-676638006616973/logs.txt
E0813 00:08:20.273467  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
functional_test.go:1181: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813000517-676638 logs --file /tmp/functional-20210813000517-676638006616973/logs.txt: (1.207166817s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.21s)

TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813000517-676638 config get cpus: exit status 14 (92.450262ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 config set cpus 2
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 config get cpus
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 config get cpus
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813000517-676638 config get cpus: exit status 14 (68.526771ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)

TestFunctional/parallel/DashboardCmd (6.1s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:857: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210813000517-676638 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:862: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210813000517-676638 --alsologtostderr -v=1] ...
=== CONT  TestFunctional/parallel/DashboardCmd
helpers_test.go:507: unable to kill pid 718716: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.10s)

TestFunctional/parallel/DryRun (0.77s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813000517-676638 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210813000517-676638 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (359.404305ms)
-- stdout --
	* [functional-20210813000517-676638] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
-- /stdout --
** stderr ** 
	I0813 00:08:23.283512  718188 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:08:23.283607  718188 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:08:23.283637  718188 out.go:311] Setting ErrFile to fd 2...
	I0813 00:08:23.283640  718188 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:08:23.283754  718188 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:08:23.284011  718188 out.go:305] Setting JSON to false
	I0813 00:08:23.342316  718188 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":13865,"bootTime":1628799438,"procs":251,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 00:08:23.342443  718188 start.go:121] virtualization: kvm guest
	I0813 00:08:23.345889  718188 out.go:177] * [functional-20210813000517-676638] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 00:08:23.347632  718188 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:08:23.392824  718188 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 00:08:23.394672  718188 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:08:23.396666  718188 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 00:08:23.397880  718188 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 00:08:23.462106  718188 docker.go:132] docker version: linux-19.03.15
	I0813 00:08:23.462231  718188 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 00:08:23.567226  718188 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:64 SystemTime:2021-08-13 00:08:23.50530017 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 00:08:23.567320  718188 docker.go:244] overlay module found
	I0813 00:08:23.569719  718188 out.go:177] * Using the docker driver based on existing profile
	I0813 00:08:23.569757  718188 start.go:278] selected driver: docker
	I0813 00:08:23.569769  718188 start.go:751] validating driver "docker" against &{Name:functional-20210813000517-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210813000517-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:08:23.569895  718188 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 00:08:23.569974  718188 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 00:08:23.569998  718188 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0813 00:08:23.571629  718188 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 00:08:23.573655  718188 out.go:177] 
	W0813 00:08:23.573779  718188 out.go:242] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0813 00:08:23.575179  718188 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:934: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813000517-676638 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.77s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210813000517-676638 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210813000517-676638 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (321.110177ms)

                                                
                                                
-- stdout --
	* [functional-20210813000517-676638] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_LOCATION=12230
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 00:08:22.973088  718045 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:08:22.973207  718045 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:08:22.973259  718045 out.go:311] Setting ErrFile to fd 2...
	I0813 00:08:22.973263  718045 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:08:22.973436  718045 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:08:22.973759  718045 out.go:305] Setting JSON to false
	I0813 00:08:23.022765  718045 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":13865,"bootTime":1628799438,"procs":250,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 00:08:23.022891  718045 start.go:121] virtualization: kvm guest
	I0813 00:08:23.025623  718045 out.go:177] * [functional-20210813000517-676638] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	I0813 00:08:23.027357  718045 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:08:23.029122  718045 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 00:08:23.030964  718045 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:08:23.037291  718045 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 00:08:23.038512  718045 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 00:08:23.103452  718045 docker.go:132] docker version: linux-19.03.15
	I0813 00:08:23.103584  718045 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 00:08:23.208356  718045 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:64 SystemTime:2021-08-13 00:08:23.154481747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 00:08:23.208450  718045 docker.go:244] overlay module found
	I0813 00:08:23.211004  718045 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0813 00:08:23.211049  718045 start.go:278] selected driver: docker
	I0813 00:08:23.211058  718045 start.go:751] validating driver "docker" against &{Name:functional-20210813000517-676638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210813000517-676638 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 00:08:23.211212  718045 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 00:08:23.211275  718045 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 00:08:23.211304  718045 out.go:242] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I0813 00:08:23.213166  718045 out.go:177]   - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 00:08:23.217299  718045 out.go:177] 
	W0813 00:08:23.217522  718045 out.go:242] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0813 00:08:23.219678  718045 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.32s)
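The French output above is produced by running minikube under a French locale. A minimal sketch of forcing a locale for one invocation, assuming (as the test does) that minikube reads the standard LC_ALL/LANG variables; the wrapper name is hypothetical:

```shell
# Hypothetical helper: run any command under an explicit locale.
# Assumption: minikube selects its translations from LC_ALL/LANG.
run_localized() {
  local locale=$1; shift
  LC_ALL="$locale" LANG="$locale" "$@"
}

# Usage mirroring the invocation in the log (not executed here):
# run_localized fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-20210813000517-676638 --dry-run
```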

                                                
                                    
TestFunctional/parallel/StatusCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:809: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 status

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:815: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:826: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd (15.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1357: (dbg) Run:  kubectl --context functional-20210813000517-676638 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1363: (dbg) Run:  kubectl --context functional-20210813000517-676638 expose deployment hello-node --type=NodePort --port=8080

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-6cbfcd7cbc-5sqwz" [547429f0-9ac8-4912-9713-80ac7692b958] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-6cbfcd7cbc-5sqwz" [547429f0-9ac8-4912-9713-80ac7692b958] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 13.014935707s
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 service list

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1372: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813000517-676638 service list: (1.385624713s)
functional_test.go:1385: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 service --namespace=default --https --url hello-node
functional_test.go:1394: found endpoint: https://192.168.49.2:30033
functional_test.go:1405: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 service hello-node --url --format={{.IP}}
functional_test.go:1414: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 service hello-node --url
functional_test.go:1420: found endpoint for hello-node: http://192.168.49.2:30033
functional_test.go:1431: Attempting to fetch http://192.168.49.2:30033 ...
functional_test.go:1450: http://192.168.49.2:30033: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-6cbfcd7cbc-5sqwz

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30033
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmd (15.91s)
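The URLs found above (`http://192.168.49.2:30033`) are the node IP joined with the service's NodePort, which is what `minikube service --url` resolves. A hedged sketch of composing the same URL by hand; the helper name and the jsonpath query in the comment are assumptions, not taken from the log:

```shell
# Build the http URL for a NodePort service from a node IP and a port.
service_url() {
  local ip=$1 port=$2
  printf 'http://%s:%s' "$ip" "$port"
}

# Hypothetical manual equivalent of `minikube service hello-node --url`:
# port=$(kubectl get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}')
# service_url 192.168.49.2 "$port"
```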

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1465: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 addons list
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (30.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [4026b086-352f-44e3-adc8-934305350d4d] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008167325s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20210813000517-676638 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20210813000517-676638 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210813000517-676638 get pvc myclaim -o=json

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210813000517-676638 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210813000517-676638 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [15ee731d-39b9-4b26-bec3-93ccb2c0eb72] Pending
helpers_test.go:343: "sp-pod" [15ee731d-39b9-4b26-bec3-93ccb2c0eb72] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [15ee731d-39b9-4b26-bec3-93ccb2c0eb72] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.008686479s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20210813000517-676638 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20210813000517-676638 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20210813000517-676638 delete -f testdata/storage-provisioner/pod.yaml: (1.671578139s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210813000517-676638 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [6a8de7eb-018c-4b64-b5bb-84b83015b244] Pending
helpers_test.go:343: "sp-pod" [6a8de7eb-018c-4b64-b5bb-84b83015b244] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:343: "sp-pod" [6a8de7eb-018c-4b64-b5bb-84b83015b244] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.010890176s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20210813000517-676638 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.79s)
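The test above proves persistence by writing /tmp/mount/foo, deleting the pod, re-creating it from the same manifest, and listing the mount again. A hedged sketch of the kind of manifests involved; the claim, pod, container, and label names are taken from the log, but the image, storage size, and access mode are assumptions since the actual testdata files are not shown:

```yaml
# Hypothetical approximation of testdata/storage-provisioner/{pvc,pod}.yaml.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]   # assumption
  resources:
    requests:
      storage: 500Mi               # assumption
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
    - name: myfrontend
      image: nginx                 # assumption
      volumeMounts:
        - mountPath: /tmp/mount
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
```

Because the pod references the claim rather than a node path, the second `sp-pod` sees the file the first one wrote.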

                                                
                                    
TestFunctional/parallel/SSHCmd (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1498: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "echo hello"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1515: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.92s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 cp testdata/cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.97s)

                                                
                                    
TestFunctional/parallel/MySQL (27s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1546: (dbg) Run:  kubectl --context functional-20210813000517-676638 replace --force -f testdata/mysql.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:343: "mysql-9bbbc5bbb-x472c" [121c4423-e97e-4d82-9ced-04ca34f095e3] Pending
helpers_test.go:343: "mysql-9bbbc5bbb-x472c" [121c4423-e97e-4d82-9ced-04ca34f095e3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
2021/08/13 00:08:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-x472c" [121c4423-e97e-4d82-9ced-04ca34f095e3] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.013311188s
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210813000517-676638 exec mysql-9bbbc5bbb-x472c -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210813000517-676638 exec mysql-9bbbc5bbb-x472c -- mysql -ppassword -e "show databases;": exit status 1 (171.627265ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210813000517-676638 exec mysql-9bbbc5bbb-x472c -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210813000517-676638 exec mysql-9bbbc5bbb-x472c -- mysql -ppassword -e "show databases;": exit status 1 (170.582456ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210813000517-676638 exec mysql-9bbbc5bbb-x472c -- mysql -ppassword -e "show databases;"

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210813000517-676638 exec mysql-9bbbc5bbb-x472c -- mysql -ppassword -e "show databases;": exit status 1 (182.657102ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210813000517-676638 exec mysql-9bbbc5bbb-x472c -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.00s)
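The first probes above fail (ERROR 1045, then ERROR 2002) while mysqld is still initializing inside the pod, and the harness simply re-runs the command until it succeeds. A minimal shell sketch of that retry pattern; the attempt count, delay, and function name are assumptions, not the harness's actual Go implementation:

```shell
# Retry a command until it succeeds or the attempt budget runs out.
retry_until_ready() {
  local attempts=$1; shift
  local i
  for i in $(seq 1 "$attempts"); do
    "$@" && return 0
    sleep "${RETRY_DELAY:-2}"
  done
  return 1
}

# Hypothetical usage against the pod from the log (not executed here):
# retry_until_ready 10 kubectl --context functional-20210813000517-676638 \
#   exec mysql-9bbbc5bbb-x472c -- mysql -ppassword -e "show databases;"
```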

                                                
                                    
TestFunctional/parallel/FileSync (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1678: Checking for existence of /etc/test/nested/copy/676638/hosts within VM

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1679: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "sudo cat /etc/test/nested/copy/676638/hosts"

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1684: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

TestFunctional/parallel/CertSync (1.78s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/676638.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "sudo cat /etc/ssl/certs/676638.pem"
functional_test.go:1719: Checking for existence of /usr/share/ca-certificates/676638.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "sudo cat /usr/share/ca-certificates/676638.pem"
functional_test.go:1719: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1746: Checking for existence of /etc/ssl/certs/6766382.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "sudo cat /etc/ssl/certs/6766382.pem"
functional_test.go:1746: Checking for existence of /usr/share/ca-certificates/6766382.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "sudo cat /usr/share/ca-certificates/6766382.pem"
functional_test.go:1746: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
E0813 00:08:50.995059  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/CertSync (1.78s)
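The hashed filenames the test checks, such as /etc/ssl/certs/51391683.0, come from OpenSSL-style trust directories, which link each certificate under `<subject_hash>.<n>`. A small sketch of where such a name comes from, using a throwaway self-signed certificate (paths and subject are illustrative, not taken from the test):

```shell
# Generate a disposable self-signed cert, then derive the OpenSSL subject-hash
# name it would get in a c_rehash-style directory like /etc/ssl/certs.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=certsync-demo" \
  -keyout /tmp/certsync-demo.key -out /tmp/certsync-demo.pem -days 1 2>/dev/null
hash=$(openssl x509 -in /tmp/certsync-demo.pem -noout -subject_hash)
echo "hashed name: ${hash}.0"
```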

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-20210813000517-676638 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/LoadImage (1.57s)

=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:239: (dbg) Run:  docker pull busybox:1.33
functional_test.go:246: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210813000517-676638
functional_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 image load docker.io/library/busybox:load-functional-20210813000517-676638
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210813000517-676638 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210813000517-676638
--- PASS: TestFunctional/parallel/LoadImage (1.57s)

TestFunctional/parallel/RemoveImage (3.93s)

=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:331: (dbg) Run:  docker pull busybox:1.32
functional_test.go:338: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210813000517-676638
functional_test.go:344: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 image load docker.io/library/busybox:remove-functional-20210813000517-676638
functional_test.go:344: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813000517-676638 image load docker.io/library/busybox:remove-functional-20210813000517-676638: (2.800781192s)
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 image rm docker.io/library/busybox:remove-functional-20210813000517-676638
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210813000517-676638 -- sudo crictl images
--- PASS: TestFunctional/parallel/RemoveImage (3.93s)

TestFunctional/parallel/LoadImageFromFile (1.26s)

=== RUN   TestFunctional/parallel/LoadImageFromFile
=== PAUSE TestFunctional/parallel/LoadImageFromFile
=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:279: (dbg) Run:  docker pull busybox:1.31
functional_test.go:286: (dbg) Run:  docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210813000517-676638
functional_test.go:293: (dbg) Run:  docker save -o busybox.tar docker.io/library/busybox:load-from-file-functional-20210813000517-676638
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 image load /home/jenkins/workspace/Docker_Linux_crio_integration/busybox.tar
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210813000517-676638 -- sudo crictl images
--- PASS: TestFunctional/parallel/LoadImageFromFile (1.26s)

TestFunctional/parallel/BuildImage (3.49s)

=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 image build -t localhost/my-image:functional-20210813000517-676638 testdata/build
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-20210813000517-676638 image build -t localhost/my-image:functional-20210813000517-676638 testdata/build: (3.177951235s)
functional_test.go:412: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210813000517-676638 image build -t localhost/my-image:functional-20210813000517-676638 testdata/build:
STEP 1: FROM busybox
STEP 2: RUN true
--> 36ce78ef72c
STEP 3: ADD content.txt /
STEP 4: COMMIT localhost/my-image:functional-20210813000517-676638
--> 34421215c69
Successfully tagged localhost/my-image:functional-20210813000517-676638
34421215c697871f138db44f6fe46884aa632334b5862ac6cbff709f10ea2101
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210813000517-676638 -- sudo crictl inspecti localhost/my-image:functional-20210813000517-676638
--- PASS: TestFunctional/parallel/BuildImage (3.49s)
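Reading the STEP lines back, the testdata/build context evidently contains a content.txt plus a Dockerfile along these lines (reconstructed from the build output above, not copied from the minikube repository):

```dockerfile
# Reconstructed from STEP 1-4 in the log; illustrative only.
FROM busybox
RUN true
ADD content.txt /
```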

TestFunctional/parallel/ListImages (0.75s)

=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 image ls
functional_test.go:446: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210813000517-676638 image ls:
localhost/minikube-local-cache-test:functional-20210813000517-676638
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/mysql:5.7
docker.io/library/busybox:1.28.4-glibc
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ListImages (0.75s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "sudo systemctl is-active docker"
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "sudo systemctl is-active docker": exit status 1 (381.061613ms)

-- stdout --
	inactive
-- /stdout --
** stderr **
	ssh: Process exited with status 3
** /stderr **
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "sudo systemctl is-active containerd"
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "sudo systemctl is-active containerd": exit status 1 (402.341147ms)

-- stdout --
	inactive
-- /stdout --
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)
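The "Non-zero exit ... status 3" results above are the success path here: `systemctl is-active` prints the unit state and exits non-zero (3 for an inactive unit), so on a cri-o cluster the docker and containerd checks are expected to fail. A runnable sketch of the invariant being asserted, with `is_active` as a stub standing in for `minikube ssh "sudo systemctl is-active <unit>"` (stub and helper names are illustrative):

```shell
# Stub: pretend only cri-o is running, mirroring systemctl is-active semantics
# (prints the state; exit 0 if active, 3 otherwise).
is_active() {
  case "$1" in
    crio) echo "active"; return 0 ;;
    *)    echo "inactive"; return 3 ;;
  esac
}

# A unit counts as disabled when the probe exits non-zero AND reports "inactive".
assert_disabled() {
  state=$(is_active "$1") && return 1   # unexpectedly active
  [ "$state" = "inactive" ]
}

assert_disabled docker && assert_disabled containerd && echo "only cri-o is active"
```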

TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1202: (dbg) Run:  out/minikube-linux-amd64 profile lis
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1206: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

TestFunctional/parallel/MountCmd/any-port (9.21s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210813000517-676638 /tmp/mounttest043274632:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1628813301748919221" to /tmp/mounttest043274632/created-by-test
functional_test_mount_test.go:110: wrote "test-1628813301748919221" to /tmp/mounttest043274632/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1628813301748919221" to /tmp/mounttest043274632/test-1628813301748919221
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (338.636218ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh -- ls -la /mount-9p
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 13 00:08 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 13 00:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 13 00:08 test-1628813301748919221
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh cat /mount-9p/test-1628813301748919221
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20210813000517-676638 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [d866701f-2086-4c07-b8b1-67c205238a03] Pending
helpers_test.go:343: "busybox-mount" [d866701f-2086-4c07-b8b1-67c205238a03] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:343: "busybox-mount" [d866701f-2086-4c07-b8b1-67c205238a03] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.061651352s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20210813000517-676638 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "sudo umount -f /mount-9p"
E0813 00:08:30.514512  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210813000517-676638 /tmp/mounttest043274632:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.21s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1240: (dbg) Run:  out/minikube-linux-amd64 profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1245: Took "355.311162ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1254: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1259: Took "68.330026ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1295: Took "456.630493ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1303: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1308: Took "93.959859ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)

TestFunctional/parallel/MountCmd/specific-port (1.91s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210813000517-676638 /tmp/mounttest572280647:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (340.103247ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh -- ls -la /mount-9p
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210813000517-676638 /tmp/mounttest572280647:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh "sudo umount -f /mount-9p": exit status 1 (298.222351ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-amd64 -p functional-20210813000517-676638 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210813000517-676638 /tmp/mounttest572280647:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.91s)
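The failed `sudo umount -f /mount-9p` above (exit status 32, "not mounted") is the expected teardown path once the mount daemon has already exited. A cleanup helper can treat that case as success; here `umount` is a stub that mimics the guest's response so the sketch is runnable outside the VM (helper names are illustrative):

```shell
# Stub standing in for the guest's umount: already unmounted, exit code 32.
umount() { echo "umount: $2: not mounted."; return 32; }

# Cleanup that tolerates an already-unmounted target.
safe_umount() {
  out=$(umount -f "$1" 2>&1) && return 0
  case "$out" in
    *"not mounted"*) return 0 ;;            # already gone: fine for teardown
    *) printf '%s\n' "$out" >&2; return 1 ;;
  esac
}

safe_umount /mount-9p && echo "cleanup ok"
```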

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:126: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20210813000517-676638 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2003: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.55s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2016: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.55s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:164: (dbg) Run:  kubectl --context functional-20210813000517-676638 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:229: tunnel at http://10.110.179.201 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:364: (dbg) stopping [out/minikube-linux-amd64 -p functional-20210813000517-676638 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210813000517-676638 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/delete_busybox_image (0.1s)

=== RUN   TestFunctional/delete_busybox_image
functional_test.go:183: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210813000517-676638
functional_test.go:188: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210813000517-676638
--- PASS: TestFunctional/delete_busybox_image (0.10s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210813000517-676638
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210813000517-676638
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.55s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:146: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20210813001032-676638 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:146: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20210813001032-676638 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (106.834113ms)

                                                
                                                
-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210813001032-676638] minikube v1.22.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"3cb6a308-7c68-48f6-938c-3a4794606dbb","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig"},"datacontenttype":"application/json","id":"147a33f9-bd98-43f0-98ae-6ab5c1c44025","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"010e62ca-c26e-41a5-bd0b-74316a097019","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube"},"datacontenttype":"application/json","id":"c8d441ae-48f8-4dab-b344-2849c8f214cd","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"a22c7151-0ca2-4b2a-8e60-eb18967e6728","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"4657467b-5cbd-4287-971d-b839c8e075c0","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20210813001032-676638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20210813001032-676638
--- PASS: TestErrorJSONOutput (0.55s)
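Editor's note: the stdout above is one CloudEvents-style JSON object per line, and the test distinguishes step, info, and error events by the `type` field. A minimal sketch of filtering such a stream for error events, using two events abbreviated from the log (the full field set is in the stdout above; only the fields read here are kept):

```python
import json

# Two sample events, abbreviated from the -- stdout -- section above.
lines = [
    '{"data":{"currentstep":"0","message":"minikube v1.22.0","name":"Initial Minikube Setup","totalsteps":"19"},"type":"io.k8s.sigs.minikube.step"}',
    '{"data":{"exitcode":"56","message":"The driver \'fail\' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"},"type":"io.k8s.sigs.minikube.error"}',
]

def errors(stream):
    """Yield (name, exitcode, message) for each error-type event."""
    for line in stream:
        ev = json.loads(line)
        if ev.get("type") == "io.k8s.sigs.minikube.error":
            d = ev["data"]
            yield d["name"], int(d["exitcode"]), d["message"]

print(list(errors(lines)))
```

Note the `exitcode` field arrives as a string and matches the process exit status (56) reported by the test harness.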

                                                
                                    
TestKicCustomNetwork/create_custom_network (31.6s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20210813001033-676638 --network=
E0813 00:10:53.877845  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210813001033-676638 --network=: (28.872571911s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210813001033-676638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20210813001033-676638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210813001033-676638: (2.687841047s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.60s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (26.15s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20210813001104-676638 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210813001104-676638 --network=bridge: (23.55503463s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210813001104-676638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20210813001104-676638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210813001104-676638: (2.53511725s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.15s)

                                                
                                    
TestKicExistingNetwork (26.8s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20210813001130-676638 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20210813001130-676638 --network=existing-network: (23.66695849s)
helpers_test.go:176: Cleaning up "existing-network-20210813001130-676638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20210813001130-676638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20210813001130-676638: (2.685594654s)
kic_custom_network_test.go:82: error deleting kic network, may need to delete manually: [unable to delete a network that is attached to a running container unable to delete a network that is attached to a running container]
--- PASS: TestKicExistingNetwork (26.80s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (120.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813001157-676638 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0813 00:13:10.031793  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
E0813 00:13:21.598260  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
E0813 00:13:21.603609  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
E0813 00:13:21.614067  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
E0813 00:13:21.634432  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
E0813 00:13:21.674812  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
E0813 00:13:21.755155  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
E0813 00:13:21.915499  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
E0813 00:13:22.236093  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
E0813 00:13:22.877159  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
E0813 00:13:24.157749  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
E0813 00:13:26.718039  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
E0813 00:13:31.838609  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
E0813 00:13:37.718233  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
E0813 00:13:42.079782  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
multinode_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813001157-676638 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m59.705084317s)
multinode_test.go:87: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813001157-676638 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (120.26s)
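Editor's note: the gaps between the repeated `cert_rotation.go:168` timestamps above (00:13:21.598, .603, .614, .634, .674, ... up to 00:13:42.079) roughly double each time, i.e. the key watcher retries on an exponential backoff. A rough sketch of that schedule; the ~5ms base and factor 2 are inferred from the timestamps, not taken from the client-go source:

```python
def backoff_gaps(base_ms=5.0, factor=2.0, steps=12):
    """Yield successive retry gaps in milliseconds for a simple
    exponential backoff (gap doubles on every retry)."""
    gap = base_ms
    for _ in range(steps):
        yield gap
        gap *= factor

gaps = list(backoff_gaps())
print([round(g) for g in gaps[:6]])  # first few gaps, ms: 5, 10, 20, 40, ...
print(round(sum(gaps) / 1000, 1))    # total elapsed over 12 retries, seconds
```

Twelve doubling retries from a 5ms base span about 20.5s, which matches the ~20.5s between the first (00:13:21.598) and last (00:13:42.079) log line in this burst.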

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (32.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:462: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813001157-676638 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813001157-676638 -- rollout status deployment/busybox
E0813 00:14:02.560361  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
multinode_test.go:467: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20210813001157-676638 -- rollout status deployment/busybox: (30.347517728s)
multinode_test.go:473: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813001157-676638 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:485: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813001157-676638 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813001157-676638 -- exec busybox-84b6686758-j4hzl -- nslookup kubernetes.io
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813001157-676638 -- exec busybox-84b6686758-pzxgm -- nslookup kubernetes.io
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813001157-676638 -- exec busybox-84b6686758-j4hzl -- nslookup kubernetes.default
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813001157-676638 -- exec busybox-84b6686758-pzxgm -- nslookup kubernetes.default
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813001157-676638 -- exec busybox-84b6686758-j4hzl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210813001157-676638 -- exec busybox-84b6686758-pzxgm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (32.57s)
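Editor's note: the two `jsonpath` queries above (`{.items[*].status.podIP}` and `{.items[*].metadata.name}`) just flatten one field out of the pod list. A rough Python equivalent against a hypothetical, abbreviated `kubectl get pods -o json` payload (pod names copied from the log; the IPs are made up for illustration):

```python
import json

# Hypothetical abbreviated pod list; only the fields the two
# jsonpath queries touch are included.
pod_list = json.loads("""
{"items": [
  {"metadata": {"name": "busybox-84b6686758-j4hzl"},
   "status": {"podIP": "10.244.0.5"}},
  {"metadata": {"name": "busybox-84b6686758-pzxgm"},
   "status": {"podIP": "10.244.1.3"}}
]}
""")

# {.items[*].status.podIP} and {.items[*].metadata.name}, respectively.
ips = [p["status"]["podIP"] for p in pod_list["items"]]
names = [p["metadata"]["name"] for p in pod_list["items"]]
print(ips, names)
```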

                                                
                                    
TestMultiNode/serial/AddNode (26.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:106: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210813001157-676638 -v 3 --alsologtostderr
E0813 00:14:43.520626  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
multinode_test.go:106: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20210813001157-676638 -v 3 --alsologtostderr: (25.923878688s)
multinode_test.go:112: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813001157-676638 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.69s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:128: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.31s)

                                                
                                    
TestMultiNode/serial/CopyFile (2.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:169: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813001157-676638 status --output json --alsologtostderr
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813001157-676638 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813001157-676638 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813001157-676638 cp testdata/cp-test.txt multinode-20210813001157-676638-m02:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813001157-676638 ssh -n multinode-20210813001157-676638-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813001157-676638 cp testdata/cp-test.txt multinode-20210813001157-676638-m03:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813001157-676638 ssh -n multinode-20210813001157-676638-m03 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestMultiNode/serial/CopyFile (2.56s)

                                                
                                    
TestMultiNode/serial/StopNode (2.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:191: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813001157-676638 node stop m03
multinode_test.go:191: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813001157-676638 node stop m03: (1.367530168s)
multinode_test.go:197: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813001157-676638 status
multinode_test.go:197: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813001157-676638 status: exit status 7 (613.136456ms)

                                                
                                                
-- stdout --
	multinode-20210813001157-676638
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210813001157-676638-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210813001157-676638-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813001157-676638 status --alsologtostderr
multinode_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813001157-676638 status --alsologtostderr: exit status 7 (647.084073ms)

                                                
                                                
-- stdout --
	multinode-20210813001157-676638
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210813001157-676638-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210813001157-676638-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0813 00:15:05.803654  754942 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:15:05.803775  754942 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:15:05.803784  754942 out.go:311] Setting ErrFile to fd 2...
	I0813 00:15:05.803788  754942 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:15:05.803887  754942 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:15:05.804077  754942 out.go:305] Setting JSON to false
	I0813 00:15:05.804101  754942 mustload.go:65] Loading cluster: multinode-20210813001157-676638
	I0813 00:15:05.804383  754942 status.go:253] checking status of multinode-20210813001157-676638 ...
	I0813 00:15:05.804781  754942 cli_runner.go:115] Run: docker container inspect multinode-20210813001157-676638 --format={{.State.Status}}
	I0813 00:15:05.847375  754942 status.go:328] multinode-20210813001157-676638 host status = "Running" (err=<nil>)
	I0813 00:15:05.847412  754942 host.go:66] Checking if "multinode-20210813001157-676638" exists ...
	I0813 00:15:05.847726  754942 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210813001157-676638
	I0813 00:15:05.891588  754942 host.go:66] Checking if "multinode-20210813001157-676638" exists ...
	I0813 00:15:05.891992  754942 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 00:15:05.892075  754942 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813001157-676638
	I0813 00:15:05.935154  754942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33293 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813001157-676638/id_rsa Username:docker}
	I0813 00:15:06.026525  754942 ssh_runner.go:149] Run: systemctl --version
	I0813 00:15:06.030349  754942 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:15:06.040612  754942 kubeconfig.go:93] found "multinode-20210813001157-676638" server: "https://192.168.49.2:8443"
	I0813 00:15:06.040641  754942 api_server.go:164] Checking apiserver status ...
	I0813 00:15:06.040680  754942 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 00:15:06.060999  754942 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1301/cgroup
	I0813 00:15:06.069412  754942 api_server.go:180] apiserver freezer: "3:freezer:/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80/system.slice/crio-19dd5e7863c367cd07166ca8faefc45bad3f141da8e1fd901416ef0375884b40.scope"
	I0813 00:15:06.069504  754942 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/57ce5ea60f50b0740935334c23f7978aea263acc238f7a360598b0af4f88ae80/system.slice/crio-19dd5e7863c367cd07166ca8faefc45bad3f141da8e1fd901416ef0375884b40.scope/freezer.state
	I0813 00:15:06.076636  754942 api_server.go:202] freezer state: "THAWED"
	I0813 00:15:06.076679  754942 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 00:15:06.081557  754942 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0813 00:15:06.081583  754942 status.go:419] multinode-20210813001157-676638 apiserver status = Running (err=<nil>)
	I0813 00:15:06.081594  754942 status.go:255] multinode-20210813001157-676638 status: &{Name:multinode-20210813001157-676638 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 00:15:06.081626  754942 status.go:253] checking status of multinode-20210813001157-676638-m02 ...
	I0813 00:15:06.081897  754942 cli_runner.go:115] Run: docker container inspect multinode-20210813001157-676638-m02 --format={{.State.Status}}
	I0813 00:15:06.124858  754942 status.go:328] multinode-20210813001157-676638-m02 host status = "Running" (err=<nil>)
	I0813 00:15:06.124889  754942 host.go:66] Checking if "multinode-20210813001157-676638-m02" exists ...
	I0813 00:15:06.125161  754942 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210813001157-676638-m02
	I0813 00:15:06.170372  754942 host.go:66] Checking if "multinode-20210813001157-676638-m02" exists ...
	I0813 00:15:06.170736  754942 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 00:15:06.170792  754942 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813001157-676638-m02
	I0813 00:15:06.211966  754942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33298 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813001157-676638-m02/id_rsa Username:docker}
	I0813 00:15:06.338310  754942 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 00:15:06.348301  754942 status.go:255] multinode-20210813001157-676638-m02 status: &{Name:multinode-20210813001157-676638-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0813 00:15:06.348342  754942 status.go:253] checking status of multinode-20210813001157-676638-m03 ...
	I0813 00:15:06.348712  754942 cli_runner.go:115] Run: docker container inspect multinode-20210813001157-676638-m03 --format={{.State.Status}}
	I0813 00:15:06.392241  754942 status.go:328] multinode-20210813001157-676638-m03 host status = "Stopped" (err=<nil>)
	I0813 00:15:06.392273  754942 status.go:341] host is not running, skipping remaining checks
	I0813 00:15:06.392280  754942 status.go:255] multinode-20210813001157-676638-m03 status: &{Name:multinode-20210813001157-676638-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.63s)

TestMultiNode/serial/StartAfterStop (31.91s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:225: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:235: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813001157-676638 node start m03 --alsologtostderr
multinode_test.go:235: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813001157-676638 node start m03 --alsologtostderr: (31.025295346s)
multinode_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813001157-676638 status
multinode_test.go:256: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.91s)

TestMultiNode/serial/RestartKeepsNodes (164.07s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:264: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210813001157-676638
multinode_test.go:271: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20210813001157-676638
E0813 00:16:05.444457  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
multinode_test.go:271: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20210813001157-676638: (42.773977133s)
multinode_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813001157-676638 --wait=true -v=8 --alsologtostderr
E0813 00:18:10.033441  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
E0813 00:18:21.598513  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
multinode_test.go:276: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813001157-676638 --wait=true -v=8 --alsologtostderr: (2m1.176540536s)
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210813001157-676638
--- PASS: TestMultiNode/serial/RestartKeepsNodes (164.07s)

TestMultiNode/serial/DeleteNode (5.74s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813001157-676638 node delete m03
multinode_test.go:375: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813001157-676638 node delete m03: (4.986525952s)
multinode_test.go:381: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813001157-676638 status --alsologtostderr
multinode_test.go:395: (dbg) Run:  docker volume ls
multinode_test.go:405: (dbg) Run:  kubectl get nodes
multinode_test.go:413: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.74s)

TestMultiNode/serial/StopMultiNode (41.61s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813001157-676638 stop
E0813 00:18:49.285552  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210813001157-676638 stop: (41.325348478s)
multinode_test.go:301: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813001157-676638 status
multinode_test.go:301: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813001157-676638 status: exit status 7 (142.531404ms)
-- stdout --
	multinode-20210813001157-676638
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210813001157-676638-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813001157-676638 status --alsologtostderr
multinode_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210813001157-676638 status --alsologtostderr: exit status 7 (140.106643ms)
-- stdout --
	multinode-20210813001157-676638
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210813001157-676638-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0813 00:19:09.637411  768124 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:19:09.637535  768124 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:19:09.637544  768124 out.go:311] Setting ErrFile to fd 2...
	I0813 00:19:09.637548  768124 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:19:09.637680  768124 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:19:09.637864  768124 out.go:305] Setting JSON to false
	I0813 00:19:09.637886  768124 mustload.go:65] Loading cluster: multinode-20210813001157-676638
	I0813 00:19:09.638227  768124 status.go:253] checking status of multinode-20210813001157-676638 ...
	I0813 00:19:09.638632  768124 cli_runner.go:115] Run: docker container inspect multinode-20210813001157-676638 --format={{.State.Status}}
	I0813 00:19:09.679367  768124 status.go:328] multinode-20210813001157-676638 host status = "Stopped" (err=<nil>)
	I0813 00:19:09.679395  768124 status.go:341] host is not running, skipping remaining checks
	I0813 00:19:09.679401  768124 status.go:255] multinode-20210813001157-676638 status: &{Name:multinode-20210813001157-676638 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 00:19:09.679428  768124 status.go:253] checking status of multinode-20210813001157-676638-m02 ...
	I0813 00:19:09.679746  768124 cli_runner.go:115] Run: docker container inspect multinode-20210813001157-676638-m02 --format={{.State.Status}}
	I0813 00:19:09.722097  768124 status.go:328] multinode-20210813001157-676638-m02 host status = "Stopped" (err=<nil>)
	I0813 00:19:09.722130  768124 status.go:341] host is not running, skipping remaining checks
	I0813 00:19:09.722138  768124 status.go:255] multinode-20210813001157-676638-m02 status: &{Name:multinode-20210813001157-676638-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (41.61s)

TestMultiNode/serial/RestartMultiNode (69.87s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:325: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:335: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813001157-676638 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:335: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813001157-676638 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m9.098465599s)
multinode_test.go:341: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210813001157-676638 status --alsologtostderr
multinode_test.go:355: (dbg) Run:  kubectl get nodes
multinode_test.go:363: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (69.87s)

TestMultiNode/serial/ValidateNameConflict (36.12s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:424: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210813001157-676638
multinode_test.go:433: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813001157-676638-m02 --driver=docker  --container-runtime=crio
multinode_test.go:433: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20210813001157-676638-m02 --driver=docker  --container-runtime=crio: exit status 14 (116.561947ms)
-- stdout --
	* [multinode-20210813001157-676638-m02] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_LOCATION=12230
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210813001157-676638-m02' is duplicated with machine name 'multinode-20210813001157-676638-m02' in profile 'multinode-20210813001157-676638'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210813001157-676638-m03 --driver=docker  --container-runtime=crio
multinode_test.go:441: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210813001157-676638-m03 --driver=docker  --container-runtime=crio: (32.669166014s)
multinode_test.go:448: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210813001157-676638
multinode_test.go:448: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20210813001157-676638: exit status 80 (287.350022ms)
-- stdout --
	* Adding node m03 to cluster multinode-20210813001157-676638
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210813001157-676638-m03 already exists in multinode-20210813001157-676638-m03 profile
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭─────────────────────────────────────────────────────────────────────────────╮
	│                                                                             │
	│    * If the above advice does not help, please let us know:                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose               │
	│                                                                             │
	│    * Please attach the following file to the GitHub issue:                  │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:453: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20210813001157-676638-m03
multinode_test.go:453: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20210813001157-676638-m03: (2.990568184s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.12s)

TestDebPackageInstall/install_amd64_debian:sid/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (11.65s)

=== RUN   TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (11.649279499s)
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (11.65s)

TestDebPackageInstall/install_amd64_debian:latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (10.62s)

=== RUN   TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (10.619644874s)
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (10.62s)

TestDebPackageInstall/install_amd64_debian:10/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:10/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (10.62s)

=== RUN   TestDebPackageInstall/install_amd64_debian:10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (10.620438198s)
--- PASS: TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (10.62s)

TestDebPackageInstall/install_amd64_debian:9/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:9/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.46s)

=== RUN   TestDebPackageInstall/install_amd64_debian:9/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (8.461070459s)
--- PASS: TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.46s)

TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (15.36s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (15.362074925s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (15.36s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (14.35s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (14.353738607s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (14.35s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (15.61s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (15.60602914s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (15.61s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (13.54s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_crio_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (13.544256102s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (13.54s)

TestInsufficientStorage (13.72s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20210813002627-676638 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20210813002627-676638 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.570954066s)
-- stdout --
	{"data":{"currentstep":"0","message":"[insufficient-storage-20210813002627-676638] minikube v1.22.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"72bbc527-5218-428d-a7cf-b00207e16c2c","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig"},"datacontenttype":"application/json","id":"32507904-2503-4d4e-8dae-1c7aaaf65ed3","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"1547dbec-7612-489a-ba7f-1607fdd870b4","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube"},"datacontenttype":"application/json","id":"e7792dac-1028-4a58-8dab-644e02b05fb0","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"e43afb82-f466-4958-9fa6-b6ac13ee888f","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"},"datacontenttype":"application/json","id":"90a9ef9e-523f-43e9-a61b-24256fbf7eaa","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"},"datacontenttype":"application/json","id":"11dc8fc5-398d-4ef8-bc58-70ca8fc2c917","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"Your cgroup does not allow setting memory."},"datacontenttype":"application/json","id":"4c041f3b-5d4b-4fa3-ad1e-4032c533e51d","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.warning"}
	{"data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"},"datacontenttype":"application/json","id":"672e7f4f-997f-4830-8a96-f313218b6b69","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20210813002627-676638 in cluster insufficient-storage-20210813002627-676638","name":"Starting Node","totalsteps":"19"},"datacontenttype":"application/json","id":"f46e6ce0-2e05-4b29-ab5c-aa96979c8da9","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"},"datacontenttype":"application/json","id":"30994088-d7f5-4bcd-970e-80c73d7ee693","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"},"datacontenttype":"application/json","id":"5a5185f9-9185-4048-9d18-ba9d58985c56","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""},"datacontenttype":"application/json","id":"b4d7049e-1a64-40d1-bf70-632b55d30f41","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20210813002627-676638 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210813002627-676638 --output=json --layout=cluster: exit status 7 (296.866179ms)
-- stdout --
	{"Name":"insufficient-storage-20210813002627-676638","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210813002627-676638","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0813 00:26:33.957439  818897 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210813002627-676638" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20210813002627-676638 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210813002627-676638 --output=json --layout=cluster: exit status 7 (292.707524ms)
-- stdout --
	{"Name":"insufficient-storage-20210813002627-676638","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210813002627-676638","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0813 00:26:34.251938  818957 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210813002627-676638" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	E0813 00:26:34.265198  818957 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/insufficient-storage-20210813002627-676638/events.json: no such file or directory

** /stderr **
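The status checks above parse minikube's `--output=json --layout=cluster` payload. For a quick look outside the test harness, a flat top-level key can be pulled with plain POSIX tools; a minimal sketch (the JSON is abbreviated from the run above, and `status_name` is our own variable, not part of the test):

```shell
# Abbreviated cluster-status payload, copied from the log above.
json='{"Name":"insufficient-storage-20210813002627-676638","StatusCode":507,"StatusName":"InsufficientStorage"}'

# Pull the StatusName field; adequate for a single flat key like this,
# though a real consumer should use a proper JSON parser.
status_name=$(printf '%s' "$json" | sed -n 's/.*"StatusName":"\([^"]*\)".*/\1/p')
echo "$status_name"
```

Note that the greedy `.*` would match the last occurrence if the key repeated (as it does in the full payload, which nests per-node status objects).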
helpers_test.go:176: Cleaning up "insufficient-storage-20210813002627-676638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20210813002627-676638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20210813002627-676638: (6.562982479s)
--- PASS: TestInsufficientStorage (13.72s)

TestKubernetesUpgrade (137.33s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813002640-676638 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813002640-676638 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (58.964500062s)
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210813002640-676638
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210813002640-676638: (2.617533524s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20210813002640-676638 status --format={{.Host}}
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20210813002640-676638 status --format={{.Host}}: exit status 7 (117.852301ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:236: status error: exit status 7 (may be ok)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813002640-676638 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0813 00:28:10.030800  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:245: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813002640-676638 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (50.982720296s)
version_upgrade_test.go:250: (dbg) Run:  kubectl --context kubernetes-upgrade-20210813002640-676638 version --output=json
version_upgrade_test.go:269: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:271: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813002640-676638 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:271: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813002640-676638 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=crio: exit status 106 (133.173475ms)

-- stdout --
	* [kubernetes-upgrade-20210813002640-676638] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_LOCATION=12230
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.0-rc.0 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210813002640-676638
	    minikube start -p kubernetes-upgrade-20210813002640-676638 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210813002640-6766382 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210813002640-676638 --kubernetes-version=v1.22.0-rc.0
	    

** /stderr **
version_upgrade_test.go:275: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813002640-676638 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:277: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210813002640-676638 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.258294282s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20210813002640-676638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210813002640-676638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210813002640-676638: (4.165287874s)
--- PASS: TestKubernetesUpgrade (137.33s)
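The `status error: exit status 7 (may be ok)` line above reflects a deliberate pattern: `minikube status` exits non-zero (7) for a stopped host, and the test tolerates that between the stop and the versioned restart. A self-contained sketch of that check, with a stub function standing in for the real binary:

```shell
# Stub for `minikube status --format={{.Host}}` against a stopped
# cluster: prints "Stopped" and exits 7, as in the log above.
minikube_status() { printf 'Stopped\n'; return 7; }

out=$(minikube_status)
rc=$?   # command substitution preserves the stub's exit status

# Exit status 7 plus "Stopped" means the host is down but the profile
# exists, so the test proceeds to restart rather than failing.
if [ "$rc" -eq 7 ] && [ "$out" = "Stopped" ]; then
  echo "status error: exit status $rc (may be ok)"
fi
```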

TestMissingContainerUpgrade (164.21s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:311: (dbg) Run:  /tmp/minikube-v1.9.1.149206891.exe start -p missing-upgrade-20210813002640-676638 --memory=2200 --driver=docker  --container-runtime=crio

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:311: (dbg) Done: /tmp/minikube-v1.9.1.149206891.exe start -p missing-upgrade-20210813002640-676638 --memory=2200 --driver=docker  --container-runtime=crio: (1m37.253320479s)
version_upgrade_test.go:320: (dbg) Run:  docker stop missing-upgrade-20210813002640-676638
E0813 00:28:21.597948  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
version_upgrade_test.go:320: (dbg) Done: docker stop missing-upgrade-20210813002640-676638: (11.487897185s)
version_upgrade_test.go:325: (dbg) Run:  docker rm missing-upgrade-20210813002640-676638
version_upgrade_test.go:331: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20210813002640-676638 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:331: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20210813002640-676638 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (51.944923251s)
helpers_test.go:176: Cleaning up "missing-upgrade-20210813002640-676638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20210813002640-676638
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20210813002640-676638: (3.054867889s)
--- PASS: TestMissingContainerUpgrade (164.21s)

TestPause/serial/Start (104.8s)

=== RUN   TestPause/serial/Start
pause_test.go:77: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210813002900-676638 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio

=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210813002900-676638 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m44.799405709s)
--- PASS: TestPause/serial/Start (104.80s)

TestNetworkPlugins/group/false (0.84s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:213: (dbg) Run:  out/minikube-linux-amd64 start -p false-20210813002926-676638 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:213: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20210813002926-676638 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (283.000666ms)

-- stdout --
	* [false-20210813002926-676638] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0813 00:29:26.263938  858882 out.go:298] Setting OutFile to fd 1 ...
	I0813 00:29:26.264355  858882 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:29:26.264374  858882 out.go:311] Setting ErrFile to fd 2...
	I0813 00:29:26.264380  858882 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 00:29:26.264663  858882 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 00:29:26.266021  858882 out.go:305] Setting JSON to false
	I0813 00:29:26.304420  858882 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-12","uptime":15128,"bootTime":1628799438,"procs":270,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0813 00:29:26.304539  858882 start.go:121] virtualization: kvm guest
	I0813 00:29:26.307300  858882 out.go:177] * [false-20210813002926-676638] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0813 00:29:26.309130  858882 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 00:29:26.307492  858882 notify.go:169] Checking for updates...
	I0813 00:29:26.310843  858882 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 00:29:26.312465  858882 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 00:29:26.314175  858882 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 00:29:26.314908  858882 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 00:29:26.372504  858882 docker.go:132] docker version: linux-19.03.15
	I0813 00:29:26.372622  858882 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 00:29:26.477929  858882 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:170 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:75 OomKillDisable:true NGoroutines:91 SystemTime:2021-08-13 00:29:26.415184618 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0813 00:29:26.478071  858882 docker.go:244] overlay module found
	I0813 00:29:26.481207  858882 out.go:177] * Using the docker driver based on user configuration
	I0813 00:29:26.481277  858882 start.go:278] selected driver: docker
	I0813 00:29:26.481285  858882 start.go:751] validating driver "docker" against <nil>
	I0813 00:29:26.481310  858882 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 00:29:26.481376  858882 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 00:29:26.481402  858882 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0813 00:29:26.482923  858882 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 00:29:26.485132  858882 out.go:177] 
	W0813 00:29:26.485304  858882 out.go:242] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0813 00:29:26.486937  858882 out.go:177] 

** /stderr **
helpers_test.go:176: Cleaning up "false-20210813002926-676638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20210813002926-676638
--- PASS: TestNetworkPlugins/group/false (0.84s)
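The failure mode exercised here is pure argument validation: with `--container-runtime=crio`, passing `--cni=false` is rejected with `MK_USAGE` (exit status 14) before any container is created. A stubbed sketch of that guard (our own function for illustration, not minikube source):

```shell
# Reject --cni=false when the runtime is crio, mirroring the
# MK_USAGE error and exit status 14 captured in the log above.
validate_flags() {
  runtime=$1
  cni=$2
  if [ "$runtime" = "crio" ] && [ "$cni" = "false" ]; then
    echo 'X Exiting due to MK_USAGE: The "crio" container runtime requires CNI' >&2
    return 14
  fi
  return 0
}

validate_flags crio false
echo "exit status: $?"
```

The test only asserts on the exit status and stderr text, which is why it completes in under a second: no cluster is ever started.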

TestStartStop/group/old-k8s-version/serial/FirstStart (103.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210813003017-676638 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.14.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210813003017-676638 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.14.0: (1m43.630291659s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (103.63s)

TestStartStop/group/no-preload/serial/FirstStart (137.06s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210813003041-676638 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210813003041-676638 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (2m17.064472426s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (137.06s)

TestPause/serial/SecondStartNoReconfiguration (11.39s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210813002900-676638 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:89: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210813002900-676638 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (11.380627794s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (11.39s)

TestPause/serial/Pause (0.72s)

=== RUN   TestPause/serial/Pause
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210813002900-676638 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

TestPause/serial/VerifyStatus (0.35s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20210813002900-676638 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20210813002900-676638 --output=json --layout=cluster: exit status 2 (350.920172ms)

-- stdout --
	{"Name":"pause-20210813002900-676638","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210813002900-676638","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.35s)
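The `StatusCode` values in the cluster JSON above reuse HTTP-style codes, paired with `StatusName` throughout this report: 200 OK, 405 Stopped, 418 Paused, 500 Error, 507 InsufficientStorage. A small lookup covering only the pairs that actually appear in this log (assembled from the report, not taken from minikube's source):

```shell
# Map the StatusCode values seen in this report to their StatusName.
status_name() {
  case $1 in
    200) echo "OK" ;;
    405) echo "Stopped" ;;
    418) echo "Paused" ;;
    500) echo "Error" ;;
    507) echo "InsufficientStorage" ;;
    *)   echo "Unknown" ;;
  esac
}

status_name 418   # the apiserver code in the paused-cluster JSON above
```

Note the mixed codes in a paused cluster: the node reports 200, the apiserver 418, and the kubelet 405, so consumers must inspect per-component codes rather than just the top-level one.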

TestPause/serial/Unpause (0.71s)

=== RUN   TestPause/serial/Unpause
pause_test.go:118: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20210813002900-676638 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

TestPause/serial/PauseAgain (5.75s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210813002900-676638 --alsologtostderr -v=5

=== CONT  TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20210813002900-676638 --alsologtostderr -v=5: (5.753989723s)
--- PASS: TestPause/serial/PauseAgain (5.75s)

TestPause/serial/DeletePaused (4.44s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20210813002900-676638 --alsologtostderr -v=5

=== CONT  TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20210813002900-676638 --alsologtostderr -v=5: (4.443558745s)
--- PASS: TestPause/serial/DeletePaused (4.44s)

TestStartStop/group/embed-certs/serial/FirstStart (287.95s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210813003107-676638 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210813003107-676638 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3: (4m47.949025434s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (287.95s)

TestPause/serial/VerifyDeletedResources (0.72s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:139: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:165: (dbg) Run:  docker ps -a
pause_test.go:170: (dbg) Run:  docker volume inspect pause-20210813002900-676638
pause_test.go:170: (dbg) Non-zero exit: docker volume inspect pause-20210813002900-676638: exit status 1 (49.286839ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20210813002900-676638

** /stderr **
--- PASS: TestPause/serial/VerifyDeletedResources (0.72s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (74.11s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210813003110-676638 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210813003110-676638 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3: (1m14.113656348s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (74.11s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210813003017-676638 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [e0823847-fbcd-11eb-8c03-0242aa08ceae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [e0823847-fbcd-11eb-8c03-0242aa08ceae] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.011733984s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210813003017-676638 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20210813003017-676638 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context old-k8s-version-20210813003017-676638 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.69s)

TestStartStop/group/old-k8s-version/serial/Stop (20.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20210813003017-676638 --alsologtostderr -v=3

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20210813003017-676638 --alsologtostderr -v=3: (20.949006627s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.95s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (7.64s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210813003110-676638 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [fb7dad16-9e36-45ea-82ac-cc70bef634fa] Pending
helpers_test.go:343: "busybox" [fb7dad16-9e36-45ea-82ac-cc70bef634fa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [fb7dad16-9e36-45ea-82ac-cc70bef634fa] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 7.011575623s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210813003110-676638 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (7.64s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20210813003110-676638 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context default-k8s-different-port-20210813003110-676638 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813003017-676638 -n old-k8s-version-20210813003017-676638
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813003017-676638 -n old-k8s-version-20210813003017-676638: exit status 7 (127.628471ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20210813003017-676638 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)
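The EnableAddonAfterStop steps above depend on `minikube status` reporting a stopped host through its exit code: after `stop`, `status --format={{.Host}}` prints `Stopped` and exits 7, which the test logs as "may be ok" before enabling the dashboard addon. A minimal sketch of that tolerant check, with a hypothetical stub function standing in for the real minikube call (so no cluster is needed):

```shell
#!/bin/sh
# Tolerant host-status check: exit 0 (running) and exit 7 (stopped) are
# both acceptable; anything else is a real failure. `fake_status` is a
# hypothetical stub mimicking `minikube status --format={{.Host}}` on a
# stopped profile -- it is NOT a real minikube invocation.
check_host_status() {
  "$@"
  rc=$?
  case "$rc" in
    0|7) echo "status exit $rc (may be ok)"; return 0 ;;
    *)   echo "unexpected status exit $rc" >&2; return 1 ;;
  esac
}

# Stub standing in for the stopped profile's status command.
fake_status() { printf 'Stopped\n'; return 7; }

check_host_status fake_status
# prints:
#   Stopped
#   status exit 7 (may be ok)
```

Against a real profile the stub would be replaced by `out/minikube-linux-amd64 status --format={{.Host}} -p <profile>`; tolerating exit 7 is what lets this step pass even though the log shows a "Non-zero exit".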

TestStartStop/group/old-k8s-version/serial/SecondStart (661.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210813003017-676638 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.14.0
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210813003017-676638 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.14.0: (11m0.588282114s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210813003017-676638 -n old-k8s-version-20210813003017-676638
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (661.12s)

TestStartStop/group/default-k8s-different-port/serial/Stop (20.78s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20210813003110-676638 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20210813003110-676638 --alsologtostderr -v=3: (20.776524233s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (20.78s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813003110-676638 -n default-k8s-different-port-20210813003110-676638
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813003110-676638 -n default-k8s-different-port-20210813003110-676638: exit status 7 (106.423977ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20210813003110-676638 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (348.57s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210813003110-676638 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210813003110-676638 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3: (5m48.061021699s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210813003110-676638 -n default-k8s-different-port-20210813003110-676638
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (348.57s)

TestStartStop/group/no-preload/serial/DeployApp (9.49s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210813003041-676638 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [45e1abf6-7aa2-405f-85fb-4c54ca630661] Pending
helpers_test.go:343: "busybox" [45e1abf6-7aa2-405f-85fb-4c54ca630661] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [45e1abf6-7aa2-405f-85fb-4c54ca630661] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.012682827s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210813003041-676638 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.49s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.74s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20210813003041-676638 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context no-preload-20210813003041-676638 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.74s)

TestStartStop/group/embed-certs/serial/DeployApp (7.55s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210813003107-676638 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [ae5c0585-b35c-4c55-b883-be2e37478470] Pending
helpers_test.go:343: "busybox" [ae5c0585-b35c-4c55-b883-be2e37478470] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [ae5c0585-b35c-4c55-b883-be2e37478470] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.013988137s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210813003107-676638 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.55s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.76s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20210813003107-676638 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context embed-certs-20210813003107-676638 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.76s)

TestStartStop/group/embed-certs/serial/Stop (21.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20210813003107-676638 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20210813003107-676638 --alsologtostderr -v=3: (21.093908868s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (21.09s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813003107-676638 -n embed-certs-20210813003107-676638
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813003107-676638 -n embed-certs-20210813003107-676638: exit status 7 (103.825637ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20210813003107-676638 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (351.19s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210813003107-676638 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3
E0813 00:38:10.031718  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
E0813 00:38:21.597898  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210813003107-676638 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.21.3: (5m50.749959285s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210813003107-676638 -n embed-certs-20210813003107-676638
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (351.19s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-bwv6c" [926a9acc-9b34-43e1-9a72-55976a270732] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013350322s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-bwv6c" [926a9acc-9b34-43e1-9a72-55976a270732] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006476866s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context default-k8s-different-port-20210813003110-676638 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20210813003110-676638 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.31s)
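The VerifyKubernetesImages step above works by dumping the node's image list with `sudo crictl images -o json` and flagging repo tags outside the expected registries. The sketch below reproduces that filtering on a canned JSON sample shaped like crictl's output (an `images` array of objects with `repoTags`); the extraction is a rough quoted-string grep for illustration, not a real JSON parser, and the single-registry allow-list is an assumption, not the test's actual logic:

```shell
#!/bin/sh
# Canned sample shaped like `crictl images -o json` output (assumption,
# not captured from a real node).
cat > /tmp/images.json <<'EOF'
{"images":[
  {"repoTags":["k8s.gcr.io/pause:3.5"]},
  {"repoTags":["docker.io/kindest/kindnetd:v20210326-1e038dc5"]},
  {"repoTags":["docker.io/library/busybox:1.28.4-glibc"]}
]}
EOF

# Pull every quoted string, keep only ones with a tag separator (:),
# drop images from the expected k8s.gcr.io registry, and report the rest
# in the same style as the test log above.
grep -o '"[^"]*"' /tmp/images.json \
  | tr -d '"' \
  | grep ':' \
  | grep -v '^k8s\.gcr\.io/' \
  | sed 's|^docker\.io/|Found non-minikube image: |'
# prints:
#   Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
#   Found non-minikube image: library/busybox:1.28.4-glibc
```

The two reported lines match the "Found non-minikube image" entries in the log: the kindnet CNI image and the busybox image deployed by DeployApp are present but are not minikube's own images.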

TestStartStop/group/default-k8s-different-port/serial/Pause (2.96s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20210813003110-676638 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210813003110-676638 -n default-k8s-different-port-20210813003110-676638
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210813003110-676638 -n default-k8s-different-port-20210813003110-676638: exit status 2 (356.476328ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210813003110-676638 -n default-k8s-different-port-20210813003110-676638
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210813003110-676638 -n default-k8s-different-port-20210813003110-676638: exit status 2 (384.128718ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20210813003110-676638 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210813003110-676638 -n default-k8s-different-port-20210813003110-676638
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210813003110-676638 -n default-k8s-different-port-20210813003110-676638
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (2.96s)
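The Pause step above checks both sides of the pause: the apiserver component reports `Paused` and the kubelet reports `Stopped`, each via a `minikube status --format=...` call exiting with status 2, which the test again tolerates as "may be ok". A sketch of that verification logic, with hypothetical `fake_*` stubs in place of the real minikube calls:

```shell
#!/bin/sh
# Stubs mimicking `minikube status --format={{.APIServer}}` and
# `--format={{.Kubelet}}` on a paused cluster (assumptions, not real calls).
fake_apiserver_status() { printf 'Paused\n';  return 2; }
fake_kubelet_status()   { printf 'Stopped\n'; return 2; }

# Expect a specific status string AND exit code 2; in POSIX sh the exit
# status of `got=$(cmd)` is the exit status of cmd itself.
expect_paused_state() {
  want=$1; shift
  got=$("$@"); rc=$?
  if [ "$got" = "$want" ] && [ "$rc" -eq 2 ]; then
    echo "ok: $want (exit status 2, may be ok)"
  else
    echo "fail: got '$got' with exit $rc" >&2
    return 1
  fi
}

expect_paused_state Paused  fake_apiserver_status
expect_paused_state Stopped fake_kubelet_status
# prints:
#   ok: Paused (exit status 2, may be ok)
#   ok: Stopped (exit status 2, may be ok)
```

After `unpause`, the same two status calls in the log exit 0 again, which is why the final two Run lines in this section show no "Non-zero exit".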

TestStartStop/group/newest-cni/serial/FirstStart (49.77s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210813003901-676638 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210813003901-676638 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (49.769027123s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.77s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.54s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20210813003901-676638 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:184: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.54s)

TestStartStop/group/newest-cni/serial/Stop (20.72s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20210813003901-676638 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20210813003901-676638 --alsologtostderr -v=3: (20.723461867s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (20.72s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813003901-676638 -n newest-cni-20210813003901-676638
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813003901-676638 -n newest-cni-20210813003901-676638: exit status 7 (108.25998ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20210813003901-676638 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (26.01s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210813003901-676638 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210813003901-676638 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.22.0-rc.0: (25.636114917s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210813003901-676638 -n newest-cni-20210813003901-676638
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.01s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:246: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:257: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20210813003901-676638 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/newest-cni/serial/Pause (2.64s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20210813003901-676638 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210813003901-676638 -n newest-cni-20210813003901-676638
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210813003901-676638 -n newest-cni-20210813003901-676638: exit status 2 (328.459208ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210813003901-676638 -n newest-cni-20210813003901-676638
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210813003901-676638 -n newest-cni-20210813003901-676638: exit status 2 (332.495578ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20210813003901-676638 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210813003901-676638 -n newest-cni-20210813003901-676638
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210813003901-676638 -n newest-cni-20210813003901-676638
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.64s)

TestNetworkPlugins/group/auto/Start (69.29s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20210813002925-676638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=crio
E0813 00:41:13.080604  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p auto-20210813002925-676638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=crio: (1m9.288158098s)
--- PASS: TestNetworkPlugins/group/auto/Start (69.29s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20210813002925-676638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (9.46s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context auto-20210813002925-676638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-d8dhr" [80768d99-4ba7-41a5-bdeb-46dd48f4beca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-d8dhr" [80768d99-4ba7-41a5-bdeb-46dd48f4beca] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.006324692s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.46s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:162: (dbg) Run:  kubectl --context auto-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:181: (dbg) Run:  kubectl --context auto-20210813002925-676638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Run:  kubectl --context auto-20210813002925-676638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/custom-weave/Start (71.62s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20210813002927-676638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=crio
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20210813002927-676638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=crio: (1m11.622467546s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (71.62s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-fqm5d" [a208d674-9151-445a-8368-919815e63b5a] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012564381s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-fqm5d" [a208d674-9151-445a-8368-919815e63b5a] Running
E0813 00:42:25.138281  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813003110-676638/client.crt: no such file or directory
E0813 00:42:25.143661  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813003110-676638/client.crt: no such file or directory
E0813 00:42:25.154064  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813003110-676638/client.crt: no such file or directory
E0813 00:42:25.174412  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813003110-676638/client.crt: no such file or directory
E0813 00:42:25.214727  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813003110-676638/client.crt: no such file or directory
E0813 00:42:25.295087  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813003110-676638/client.crt: no such file or directory
E0813 00:42:25.455840  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813003110-676638/client.crt: no such file or directory
E0813 00:42:25.776442  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813003110-676638/client.crt: no such file or directory
E0813 00:42:26.417131  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813003110-676638/client.crt: no such file or directory
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007117407s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context embed-certs-20210813003107-676638 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20210813003107-676638 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

TestNetworkPlugins/group/cilium/Start (94.25s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20210813002927-676638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=crio
E0813 00:42:45.620480  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813003110-676638/client.crt: no such file or directory
E0813 00:43:06.101485  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813003110-676638/client.crt: no such file or directory
E0813 00:43:10.031440  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210812235522-676638/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20210813002927-676638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=crio: (1m34.246167882s)
--- PASS: TestNetworkPlugins/group/cilium/Start (94.25s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20210813002927-676638 "pgrep -a kubelet"
E0813 00:43:21.597610  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813000517-676638/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-weave/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context custom-weave-20210813002927-676638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-mn8j4" [000dc3c4-e3a0-414a-ad3d-a60345fac225] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-mn8j4" [000dc3c4-e3a0-414a-ad3d-a60345fac225] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 9.006035679s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (9.26s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-pnlz6" [bfd4089d-fbce-11eb-905e-0242c0a85502] Running
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01150138s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-pnlz6" [bfd4089d-fbce-11eb-905e-0242c0a85502] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006384159s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context old-k8s-version-20210813003017-676638 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.26s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20210813003017-676638 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/old-k8s-version/serial/Pause (3.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20210813003017-676638 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210813003017-676638 -n old-k8s-version-20210813003017-676638
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210813003017-676638 -n old-k8s-version-20210813003017-676638: exit status 2 (377.130426ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210813003017-676638 -n old-k8s-version-20210813003017-676638
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210813003017-676638 -n old-k8s-version-20210813003017-676638: exit status 2 (366.23268ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20210813003017-676638 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210813003017-676638 -n old-k8s-version-20210813003017-676638
E0813 00:43:47.062631  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813003110-676638/client.crt: no such file or directory
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210813003017-676638 -n old-k8s-version-20210813003017-676638
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.07s)

TestNetworkPlugins/group/enable-default-cni/Start (90.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20210813002925-676638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=crio
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20210813002925-676638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m30.407080102s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.41s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-ttlzr" [2b285138-bb8f-4596-867c-c37822ad6a92] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.017426696s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20210813002927-676638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.30s)

TestNetworkPlugins/group/cilium/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context cilium-20210813002927-676638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-r6248" [88d64683-26c4-4d87-818a-ac9314ab01dd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-r6248" [88d64683-26c4-4d87-818a-ac9314ab01dd] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 9.011691875s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (9.28s)

TestNetworkPlugins/group/cilium/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:162: (dbg) Run:  kubectl --context cilium-20210813002927-676638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.19s)

TestNetworkPlugins/group/cilium/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:181: (dbg) Run:  kubectl --context cilium-20210813002927-676638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.21s)

TestNetworkPlugins/group/cilium/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:231: (dbg) Run:  kubectl --context cilium-20210813002927-676638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.23s)

TestNetworkPlugins/group/kindnet/Start (68.4s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20210813002926-676638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=crio
E0813 00:45:08.983132  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813003110-676638/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20210813002926-676638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=crio: (1m8.395956615s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (68.40s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20210813002925-676638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context enable-default-cni-20210813002925-676638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-2sb27" [e280c898-9192-435c-b4ce-4eb00e4ed3b7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-2sb27" [e280c898-9192-435c-b4ce-4eb00e4ed3b7] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.00670683s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.27s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:343: "kindnet-5t5gw" [bfec9053-cb2d-42b0-91bf-eb47c527a699] Running
=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.013366916s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20210813002926-676638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context kindnet-20210813002926-676638 replace --force -f testdata/netcat-deployment.yaml
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-qkwlq" [ff7f90cf-9ee8-4992-ba03-d96c16c75540] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-qkwlq" [ff7f90cf-9ee8-4992-ba03-d96c16c75540] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006404617s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20210813002926-676638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:181: (dbg) Run:  kubectl --context kindnet-20210813002926-676638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.27s)

TestNetworkPlugins/group/kindnet/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:231: (dbg) Run:  kubectl --context kindnet-20210813002926-676638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.27s)

TestNetworkPlugins/group/bridge/Start (47.08s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20210813002925-676638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=crio
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20210813002925-676638 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=crio: (47.079011651s)
--- PASS: TestNetworkPlugins/group/bridge/Start (47.08s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20210813002925-676638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context bridge-20210813002925-676638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-w8m2z" [cac5dc20-7354-48d9-a0a5-1d6851a30ade] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-w8m2z" [cac5dc20-7354-48d9-a0a5-1d6851a30ade] Running
E0813 00:46:56.455622  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002925-676638/client.crt: no such file or directory
E0813 00:46:56.460913  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002925-676638/client.crt: no such file or directory
E0813 00:46:56.471242  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002925-676638/client.crt: no such file or directory
E0813 00:46:56.491606  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002925-676638/client.crt: no such file or directory
E0813 00:46:56.531868  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002925-676638/client.crt: no such file or directory
E0813 00:46:56.612212  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002925-676638/client.crt: no such file or directory
E0813 00:46:56.772621  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002925-676638/client.crt: no such file or directory
E0813 00:46:57.093197  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002925-676638/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.006781635s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210813002925-676638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:181: (dbg) Run:  kubectl --context bridge-20210813002925-676638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0813 00:46:57.734288  676638 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-crio-12230-673681-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813002925-676638/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:231: (dbg) Run:  kubectl --context bridge-20210813002925-676638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (24/250)

TestDownloadOnly/v1.14.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.21.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.21.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.21.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.21.3/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.21.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.0-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.0-rc.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/kubectl (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:115: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:188: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:467: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:527: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.81s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:91: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20210813003109-676638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20210813003109-676638
--- SKIP: TestStartStop/group/disable-driver-mounts (0.81s)

                                                
                                    
TestNetworkPlugins/group/kubenet (0.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:88: Skipping the test as crio container runtimes requires CNI
helpers_test.go:176: Cleaning up "kubenet-20210813002925-676638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20210813002925-676638
--- SKIP: TestNetworkPlugins/group/kubenet (0.59s)

                                                
                                    
TestNetworkPlugins/group/flannel (0.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:76: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20210813002925-676638" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20210813002925-676638
--- SKIP: TestNetworkPlugins/group/flannel (0.58s)
