Test Report: Docker_Linux 9627

Commit: 28044cddb5b825dc6c4e07ed62c91708294461e9

Failed tests (12/209)

TestAddons/parallel/Registry (36.13s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:199: registry stabilized in 22.218473ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:201: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:333: "registry-wbts4" [8892c84c-3938-4ded-a15a-9333012648e2] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:201: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.035298841s

=== CONT  TestAddons/parallel/Registry
addons_test.go:204: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:333: "registry-proxy-js2w2" [8ab364e7-de9a-4498-a665-85e8ad7001b5] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:204: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009668878s
addons_test.go:209: (dbg) Run:  kubectl --context addons-20201109132301-342799 delete po -l run=registry-test --now
addons_test.go:214: (dbg) Run:  kubectl --context addons-20201109132301-342799 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:214: (dbg) Non-zero exit: kubectl --context addons-20201109132301-342799 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 128 (4.628503361s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	pod default/registry-test terminated (ContainerCannotRun)
	OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/minikube/google_application_credentials.json\\\" to rootfs \\\"/var/lib/docker/overlay2/8954344407b890efb9e33682a8716dbd7d5922081f9d9979967ea660eb497bf8/merged\\\" at \\\"/google-app-creds.json\\\" caused \\\"stat /var/lib/minikube/google_application_credentials.json: no such file or directory\\\"\"": unknown

** /stderr **
addons_test.go:216: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-20201109132301-342799 run --rm registry-test --restart=Never --image=busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 128
addons_test.go:220: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201109132301-342799 ip
2020/11/09 13:25:53 [DEBUG] GET http://192.168.49.16:5000
addons_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201109132301-342799 addons disable registry --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Registry
addons_test.go:257: (dbg) Done: out/minikube-linux-amd64 -p addons-20201109132301-342799 addons disable registry --alsologtostderr -v=1: (4.219533168s)
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:225: (dbg) Run:  docker inspect addons-20201109132301-342799
helpers_test.go:229: (dbg) docker inspect addons-20201109132301-342799:

-- stdout --
	[
	    {
	        "Id": "05568404e6b02c4d5394dbeef4d25d91e458764af41cb561be218af62b27721b",
	        "Created": "2020-11-09T21:23:03.102320439Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 359503,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-11-09T21:23:03.660045221Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e0876e0a2db41a04c8143cbd27a7a5f9e10b610cb093093def031c59e5b44b0c",
	        "ResolvConfPath": "/var/lib/docker/containers/05568404e6b02c4d5394dbeef4d25d91e458764af41cb561be218af62b27721b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05568404e6b02c4d5394dbeef4d25d91e458764af41cb561be218af62b27721b/hostname",
	        "HostsPath": "/var/lib/docker/containers/05568404e6b02c4d5394dbeef4d25d91e458764af41cb561be218af62b27721b/hosts",
	        "LogPath": "/var/lib/docker/containers/05568404e6b02c4d5394dbeef4d25d91e458764af41cb561be218af62b27721b/05568404e6b02c4d5394dbeef4d25d91e458764af41cb561be218af62b27721b-json.log",
	        "Name": "/addons-20201109132301-342799",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20201109132301-342799:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20201109132301-342799",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2726297600,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dab5eef1d484caf5ab575eabba4e458ffcac8086435d8bb3adc7e62f5dfa3712-init/diff:/var/lib/docker/overlay2/95e86d765bff52afd170fbce58d14aba80acb592622f18ddf93f06d4a9f68b7b/diff:/var/lib/docker/overlay2/f3b9d7ca87d0bc2c33a2b73d4acfe36b9573ed84758410f9fb2646a520f241fe/diff:/var/lib/docker/overlay2/207fc0f6ee67dd96ccb25c63e93e7189f5f397b509f8c71ead5583a80c5f94b5/diff:/var/lib/docker/overlay2/8caa5ec9f05b14f99c7708700c1734fe9d519d629a5228c0c3cb1231f1cef09e/diff:/var/lib/docker/overlay2/f2f5d9470c647e56b32cee1438db6b6249291738854b6e7a0a4a405034649aa6/diff:/var/lib/docker/overlay2/99b3672a581ed70eb5d25da5aed44455272a7adbd48148c91dc38abfbaa6e93d/diff:/var/lib/docker/overlay2/4bc3f7ac122327f368cdfb9bf82d578fce25cf7f63160aa08d869610da8889f4/diff:/var/lib/docker/overlay2/0f945a2ada7520d0f95289ddf08a8e7aedfd5dbfad8087529175cd81d61b52bc/diff:/var/lib/docker/overlay2/a14122d44a5bdb05416a0e7fbe1684738e86454c2eeb7671ecc856e6fdaa0273/diff:/var/lib/docker/overlay2/e669c0
1451df27b1aeed4835d4d9d40fae282947f2b260d9bc1d86355f464b80/diff:/var/lib/docker/overlay2/f0227d1f2e3671850f58a9e89d347d8c2a26c40d79b078f02122819452c582d8/diff:/var/lib/docker/overlay2/35b3431b00b5c7bd66b1d24487a8a45645708ce7a87848986068c0df1515b812/diff:/var/lib/docker/overlay2/c1af3457c560ed19e519e64ff919ac7035d8d9801f4f44adf7b160339827ebd0/diff:/var/lib/docker/overlay2/3861f37a52f49a93d483d69cc77384a5792b260325af8224c6225039f7480889/diff:/var/lib/docker/overlay2/05239e7556fc89f82e761ff1ddd2d1ed0ee1b426a039ad3c9a3ba0519d737f51/diff:/var/lib/docker/overlay2/cb1f04e7554841e6f2102080f7854a3c7e464e45648c1419ef2f8b4d8a03f0bb/diff:/var/lib/docker/overlay2/80908997b1122a686bee670e8dd93659dece6b51614d8576f02c11b96178534d/diff:/var/lib/docker/overlay2/6c4d0be909f5f0a95818a6009b44c355b57e39b0709de1cce2ba45597354d590/diff:/var/lib/docker/overlay2/3a855d5f2b162c2d938a5745ba8131c9a2b244967b4a03f6cddd177236f3a934/diff:/var/lib/docker/overlay2/0ab0a2a3e614830a41a389aa6ff55503f9c10b1227acfb927550f9726eb9a605/diff:/var/lib/d
ocker/overlay2/3331dc047d4c1c4d3b6bf48aba65be1ad34cff6d25329ef56e91ad8e4ee65fde/diff:/var/lib/docker/overlay2/d286c2ac5bb00869e09924bb7fc7389073b7267af1cc4c3ae6a339b29e4c0cc1/diff:/var/lib/docker/overlay2/d0a518fbd2bc93756d11040cfe4d7757f96e813dd9ddf918a54a82f7e61cb791/diff:/var/lib/docker/overlay2/2d63359a4d0da5faa54d017f487391caaee972fe4fb98db4b82e7aee2d9b01ee/diff:/var/lib/docker/overlay2/40ca142bba947004e2782bce25bd04810589fffbf8086ba91e3a074517cfa13c/diff:/var/lib/docker/overlay2/5b738c1270b81825ef44e9bd466dd8533bc4e3e3796355d7dc5dc104fb3a18c8/diff:/var/lib/docker/overlay2/c57d8a6f6b648bd2cf9d7140762b8590e1a71b9040c8a0f2b8ab31f65011bcd2/diff:/var/lib/docker/overlay2/37f83aba64f92a6cf695468f5b0a1016634c95e09933dcecec297cca22b9038b/diff:/var/lib/docker/overlay2/396d3843ed4e04893bd4bf961eab2522ce90962e626e057f5be38cbe0ed65d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dab5eef1d484caf5ab575eabba4e458ffcac8086435d8bb3adc7e62f5dfa3712/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dab5eef1d484caf5ab575eabba4e458ffcac8086435d8bb3adc7e62f5dfa3712/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dab5eef1d484caf5ab575eabba4e458ffcac8086435d8bb3adc7e62f5dfa3712/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-20201109132301-342799",
	                "Source": "/var/lib/docker/volumes/addons-20201109132301-342799/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20201109132301-342799",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20201109132301-342799",
	                "name.minikube.sigs.k8s.io": "addons-20201109132301-342799",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a351502a35abc3394fd8a69013af684e9f5543df76a2a09d0059abd567bf85d8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32975"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32972"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a351502a35ab",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20201109132301-342799": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.16"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "05568404e6b0"
	                    ],
	                    "NetworkID": "dba738592c7b0e3b1aadec675a6342fa7fe73c937a2ea48aacf065c2fb880e96",
	                    "EndpointID": "f2a2450cc0d221e9495a0c83db22099bd0ad19412a5fa7a2ed3fa0c54768bc59",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:10",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-20201109132301-342799 -n addons-20201109132301-342799
helpers_test.go:233: (dbg) Done: out/minikube-linux-amd64 status --format={{.Host}} -p addons-20201109132301-342799 -n addons-20201109132301-342799: (2.740199169s)
helpers_test.go:238: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201109132301-342799 logs -n 25

=== CONT  TestAddons/parallel/Registry
helpers_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p addons-20201109132301-342799 logs -n 25: (12.412906759s)
helpers_test.go:246: TestAddons/parallel/Registry logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Mon 2020-11-09 21:23:04 UTC, end at Mon 2020-11-09 21:26:04 UTC. --
	* Nov 09 21:24:07 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:24:07.793642306Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:24:08 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:24:08.681175108Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:24:08 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:24:08.794992148Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:24:08 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:24:08.893914906Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:24:09 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:24:09.970462409Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:24:12 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:24:12.673423007Z" level=error msg="stream copy error: reading from a closed fifo"
	* Nov 09 21:24:12 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:24:12.673586647Z" level=error msg="stream copy error: reading from a closed fifo"
	* Nov 09 21:24:13 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:24:13.027814887Z" level=error msg="2e04aca88e96f715bb73e3f009c8d9fc9dad16a022687d685f4f8d224ec5cf05 cleanup: failed to delete container from containerd: no such container"
	* Nov 09 21:24:13 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:24:13.027895568Z" level=error msg="Handler for POST /v1.40/containers/2e04aca88e96f715bb73e3f009c8d9fc9dad16a022687d685f4f8d224ec5cf05/start returned error: OCI runtime create failed: container_linux.go:349: starting container process caused \"process_linux.go:449: container init caused \\\"\\\"\": unknown"
	* Nov 09 21:24:22 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:24:22.982205770Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:24:29 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:24:29.858486774Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:25:33 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:25:33.107107782Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* Nov 09 21:25:33 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:25:33.193785235Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:25:52 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:25:52.313473061Z" level=error msg="stream copy error: reading from a closed fifo"
	* Nov 09 21:25:52 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:25:52.357738088Z" level=error msg="92e456ec081489aeecf1ebb5cdd01d0abc780263a56cb340a29420ad6cbc7e68 cleanup: failed to delete container from containerd: no such container"
	* Nov 09 21:25:52 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:25:52.357819444Z" level=error msg="Handler for POST /v1.40/containers/92e456ec081489aeecf1ebb5cdd01d0abc780263a56cb340a29420ad6cbc7e68/start returned error: OCI runtime create failed: container_linux.go:349: starting container process caused \"process_linux.go:449: container init caused \\\"rootfs_linux.go:58: mounting \\\\\\\"/var/lib/minikube/google_application_credentials.json\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/overlay2/8954344407b890efb9e33682a8716dbd7d5922081f9d9979967ea660eb497bf8/merged\\\\\\\" at \\\\\\\"/google-app-creds.json\\\\\\\" caused \\\\\\\"stat /var/lib/minikube/google_application_credentials.json: no such file or directory\\\\\\\"\\\"\": unknown"
	* Nov 09 21:25:56 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:25:56.786252891Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:25:56 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:25:56.885992637Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:25:57 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:25:57.688069873Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:25:57 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:25:57.689618430Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:25:57 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:25:57.892854769Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:26:02 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:26:02.987937730Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:26:03 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:26:03.068908023Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:26:03 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:26:03.168036276Z" level=info msg="Container f70c7f7a07470763ab76d8e9e5fc48f3e49ae61ef5cba6efcc9c40b8d45199b5 failed to exit within 30 seconds of signal 15 - using the force"
	* Nov 09 21:26:03 addons-20201109132301-342799 dockerd[669]: time="2020-11-09T21:26:03.591803040Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                             CREATED              STATE               NAME                         ATTEMPT             POD ID
	* 92e456ec08148       busybox@sha256:a9286defaba7b3a519d585ba0e37d0b2cbee74ebfe590960b0b1d6a5e97d1e1d                                                   12 seconds ago       Created             registry-test                0                   a5431e532e73a
	* c7a63228099cd       busybox@sha256:bda689514be526d9557ad442312e5d541757c453c50b8cf2ae68597c291385a1                                                   21 seconds ago       Running             busybox                      0                   b29319af12bee
	* d02981da985e5       quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5                              27 seconds ago       Running             liveness-probe               0                   7428ca744648e
	* 338301c40c8f4       quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373                            27 seconds ago       Running             packageserver                1                   942da155b3f30
	* d5bca5462f58a       quay.io/operator-framework/upstream-community-operators@sha256:abaa54d83d2825c7d2bc9367edbc1a3707df88e43ded36ff441398f23f030b6e   31 seconds ago       Running             registry-server              0                   ec17478e3c745
	* 70d50f434d6d6       quay.io/k8scsi/hostpathplugin@sha256:aa223f9df8c1d477a9f2a4a2a7d104561e6d365e54671aacbc770dffcc0683ad                             52 seconds ago       Running             hostpath                     0                   7428ca744648e
	* f70c7f7a07470       quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373                            53 seconds ago       Exited              packageserver                0                   cd8a2d6aedd22
	* cedf981027bca       k8s.gcr.io/ingress-nginx/controller@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f                       54 seconds ago       Running             controller                   0                   d2b360d3b9356
	* dedb03463031b       quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373                            54 seconds ago       Exited              packageserver                0                   942da155b3f30
	* c861c8efd87bb       quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373                            54 seconds ago       Running             packageserver                0                   aa3d3a549cf70
	* 5179ea69fcca1       gcr.io/k8s-staging-sig-storage/csi-provisioner@sha256:8f36191970a82677ffe222007b08395dd7af0a5bb5b93db0e82523b43de2bfb2            About a minute ago   Running             csi-provisioner              0                   1a919ea81e8a7
	* 089fd703ac9bf       quay.io/k8scsi/csi-snapshotter@sha256:35ead85dd09aa8cc612fdb598d4e0e2f048bef816f1b74df5eeab67cd21b10aa                            About a minute ago   Running             csi-snapshotter              0                   d126e5bfe58b1
	* 0a024f7ce677d       quay.io/k8scsi/csi-attacher@sha256:8fcb9472310dd424c4da8ee06ff200b5e6f091dff39a079e470599e4d0dcf328                               About a minute ago   Running             csi-attacher                 0                   6e38746223fe5
	* 517c8ad369300       gcr.io/google_containers/kube-registry-proxy@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da              About a minute ago   Exited              registry-proxy               0                   3ef068e4107c3
	* 91ef93e61fd4f       quay.io/k8scsi/csi-resizer@sha256:75ad39004ac49267981c9cb3323a7f73f0b203e1c181117363bf215e10144e8a                                About a minute ago   Running             csi-resizer                  0                   2babb38313a8a
	* 3150d3c07e04c       gcr.io/kubernetes-helm/tiller@sha256:6003775d503546087266eda39418d221f9afb5ccfe35f637c32a1161619a3f9c                             About a minute ago   Running             tiller                       0                   fa8c77396a9c0
	* df950196bbe97       quay.io/k8scsi/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309                  About a minute ago   Running             node-driver-registrar        0                   7428ca744648e
	* 34ce9cf34424b       4d4f44df9f905                                                                                                                     About a minute ago   Exited              patch                        2                   32b78f453a386
	* 293c4db1c11a5       quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373                            About a minute ago   Running             catalog-operator             0                   82d805fac726b
	* 5dff05bbc9e94       registry.hub.docker.com/library/registry@sha256:8be26f81ffea54106bae012c6f349df70f4d5e7e2ec01b143c46e2c03b9e551d                  About a minute ago   Exited              registry                     0                   f02bfa54d9106
	* 4560c5cb8d47b       k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892                           About a minute ago   Running             metrics-server               0                   c1d619218b777
	* d932ee1dec2f3       quay.io/operator-framework/olm@sha256:0d15ffb5d10a176ef6e831d7865f98d51255ea5b0d16403618c94a004d049373                            About a minute ago   Running             olm-operator                 0                   d853037b41e22
	* fb8007c362513       jettech/kube-webhook-certgen@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689                              About a minute ago   Exited              create                       0                   f7d55bcd63c71
	* ec0ccc76e41a9       jettech/kube-webhook-certgen@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689                              About a minute ago   Exited              patch                        0                   4e0406059a26d
	* 4c1c3ed986eee       jettech/kube-webhook-certgen@sha256:da8122a78d7387909cf34a0f34db0cce672da1379ee4fd57c626a4afe9ac12b7                              About a minute ago   Exited              create                       0                   a5b40655675cc
	* d0ee375969ff4       bad58561c4be7                                                                                                                     2 minutes ago        Running             storage-provisioner          0                   424442e55015d
	* 4df414df190c1       gcr.io/k8s-staging-csi/snapshot-controller@sha256:9a44a869d23e42f5d7954c9a5c9ec1a76a0a5d6f23fce5e68e1232a017d3d38c                2 minutes ago        Running             volume-snapshot-controller   0                   573db206850f7
	* 5a9ac29e96e41       bfe3a36ebd252                                                                                                                     2 minutes ago        Running             coredns                      0                   74f0243cb0bc6
	* f6b75defdecc6       d373dd5a8593a                                                                                                                     2 minutes ago        Running             kube-proxy                   0                   de0defa2a1c08
	* 6a5979ff16b30       607331163122e                                                                                                                     2 minutes ago        Running             kube-apiserver               0                   871adcb247aa7
	* ddd078525f2e4       8603821e1a7a5                                                                                                                     2 minutes ago        Running             kube-controller-manager      0                   5845cd99df907
	* 9e0d25030346b       2f32d66b884f8                                                                                                                     2 minutes ago        Running             kube-scheduler               0                   df5c151cc2bec
	* 338b62eadcf17       0369cf4303ffd                                                                                                                     2 minutes ago        Running             etcd                         0                   83d131d562144
	* 
	* ==> coredns [5a9ac29e96e4] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* 
	* ==> describe nodes <==
	* Name:               addons-20201109132301-342799
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=addons-20201109132301-342799
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=21ac2a6a37964be4739a8be2fb5a50a8d224597d
	*                     minikube.k8s.io/name=addons-20201109132301-342799
	*                     minikube.k8s.io/updated_at=2020_11_09T13_23_33_0700
	*                     minikube.k8s.io/version=v1.14.2
	*                     node-role.kubernetes.io/master=
	*                     topology.hostpath.csi/node=addons-20201109132301-342799
	* Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-20201109132301-342799"}
	*                     kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Mon, 09 Nov 2020 21:23:30 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  addons-20201109132301-342799
	*   AcquireTime:     <unset>
	*   RenewTime:       Mon, 09 Nov 2020 21:25:58 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Mon, 09 Nov 2020 21:25:48 +0000   Mon, 09 Nov 2020 21:23:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Mon, 09 Nov 2020 21:25:48 +0000   Mon, 09 Nov 2020 21:23:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Mon, 09 Nov 2020 21:25:48 +0000   Mon, 09 Nov 2020 21:23:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Mon, 09 Nov 2020 21:25:48 +0000   Mon, 09 Nov 2020 21:23:45 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.49.16
	*   Hostname:    addons-20201109132301-342799
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 a4c2a4e521d240fea258754d777b87c4
	*   System UUID:                892f015f-7314-4b78-be60-ccca57970c90
	*   Boot ID:                    9ad1ab50-5be9-48e2-8ae1-dc31113bc120
	*   Kernel Version:             4.9.0-14-amd64
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://19.3.13
	*   Kubelet Version:            v1.19.2
	*   Kube-Proxy Version:         v1.19.2
	* Non-terminated Pods:          (27 in total)
	*   Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	*   default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	*   default                     task-pv-pod                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	*   gcp-auth                    gcp-auth-74f9689fd7-h8p7p                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	*   kube-system                 coredns-f9fd979d6-6sj4j                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m20s
	*   kube-system                 csi-hostpath-attacher-0                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	*   kube-system                 csi-hostpath-provisioner-0                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	*   kube-system                 csi-hostpath-resizer-0                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	*   kube-system                 csi-hostpath-snapshotter-0                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	*   kube-system                 csi-hostpathplugin-0                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	*   kube-system                 etcd-addons-20201109132301-342799                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	*   kube-system                 ingress-nginx-controller-9bc9f8988-z6j2k                100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         2m27s
	*   kube-system                 kube-apiserver-addons-20201109132301-342799             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	*   kube-system                 kube-controller-manager-addons-20201109132301-342799    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m31s
	*   kube-system                 kube-proxy-2f6bk                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	*   kube-system                 kube-scheduler-addons-20201109132301-342799             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m31s
	*   kube-system                 metrics-server-d9b576748-kl5sr                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	*   kube-system                 registry-proxy-js2w2                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	*   kube-system                 registry-wbts4                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	*   kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	*   kube-system                 tiller-deploy-565984b594-frbdj                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	*   kube-system                 volume-snapshot-controller-0                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	*   olm                         catalog-operator-69c9c9d9bd-d7cnl                       10m (0%)      0 (0%)      80Mi (0%)        0 (0%)         2m22s
	*   olm                         olm-operator-69fc5f5699-n6pwr                           10m (0%)      0 (0%)      160Mi (0%)       0 (0%)         2m23s
	*   olm                         operatorhubio-catalog-xhkpn                             10m (0%)      100m (1%)   50Mi (0%)        100Mi (0%)     106s
	*   olm                         packageserver-66bb6d45f7-8ct82                          10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         109s
	*   olm                         packageserver-6bcb8dd987-6ppbc                          10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         110s
	*   olm                         packageserver-6bcb8dd987-lxv8s                          10m (0%)      0 (0%)      50Mi (0%)        0 (0%)         111s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                810m (10%)  100m (1%)
	*   memory             600Mi (1%)  270Mi (0%)
	*   ephemeral-storage  0 (0%)      0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age                    From        Message
	*   ----    ------                   ----                   ----        -------
	*   Normal  NodeHasSufficientMemory  2m44s (x5 over 2m44s)  kubelet     Node addons-20201109132301-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    2m44s (x4 over 2m44s)  kubelet     Node addons-20201109132301-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     2m44s (x4 over 2m44s)  kubelet     Node addons-20201109132301-342799 status is now: NodeHasSufficientPID
	*   Normal  Starting                 2m33s                  kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  2m33s                  kubelet     Node addons-20201109132301-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    2m33s                  kubelet     Node addons-20201109132301-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     2m33s                  kubelet     Node addons-20201109132301-342799 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             2m33s                  kubelet     Node addons-20201109132301-342799 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  2m32s                  kubelet     Updated Node Allocatable limit across pods
	*   Normal  Starting                 2m24s                  kube-proxy  Starting kube-proxy.
	*   Normal  NodeReady                2m22s                  kubelet     Node addons-20201109132301-342799 status is now: NodeReady
	* 
	* ==> dmesg <==
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 0a fe 8d 22 fe 64 08 06        .........".d..
	* [Nov 9 21:20] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +0.259017] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +0.699115] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:22] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev cni0
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 56 18 e6 be 5d 26 08 06        ......V...]&..
	* [  +0.000011] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff 56 18 e6 be 5d 26 08 06        ......V...]&..
	* [  +0.001476] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 26 2e b7 a0 21 84 08 06        ......&...!...
	* [  +0.000009] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff 26 2e b7 a0 21 84 08 06        ......&...!...
	* [  +4.233543] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 66 aa 8e 8e 7d d3 08 06        ......f...}...
	* [  +0.000007] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	* [  +0.000001] ll header: 00000000: ff ff ff ff ff ff 66 aa 8e 8e 7d d3 08 06        ......f...}...
	* [  +0.280683] IPv4: martian source 10.85.0.3 from 10.85.0.3, on dev eth0
	* [  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 6d 23 60 52 db 08 06        .......m#`R...
	* [ +11.187645] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth42837a93
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 36 2e b8 98 4d 23 08 06        ......6...M#..
	* [ +13.774367] IPv4: martian source 10.85.0.4 from 10.85.0.4, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff ca 28 f7 29 84 fa 08 06        .......(.)....
	* [ +12.851572] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethd2fa70cc
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 26 58 58 bd 5b 5c 08 06        ......&XX.[\..
	* [Nov 9 21:23] cgroup: cgroup2: unknown option "nsdelegate"
	* 
	* ==> etcd [338b62eadcf1] <==
	* 2020-11-09 21:26:01.067791 W | etcdserver: read-only range request "key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3e8ff82ba4b\" " with result "range_response_count:1 size:823" took too long (293.930269ms) to execute
	* 2020-11-09 21:26:01.069157 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/olm/packageserver\" " with result "range_response_count:1 size:9920" took too long (104.308808ms) to execute
	* 2020-11-09 21:26:01.275253 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (106.784579ms) to execute
	* 2020-11-09 21:26:01.359143 W | etcdserver: read-only range request "key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3eb53b04991\" " with result "range_response_count:1 size:795" took too long (185.452957ms) to execute
	* 2020-11-09 21:26:01.380492 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:620" took too long (202.416109ms) to execute
	* 2020-11-09 21:26:01.564105 W | etcdserver: request "header:<ID:12712383767199948166 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3eb53b04991\" mod_revision:729 > success:<request_delete_range:<key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3eb53b04991\" > > failure:<request_range:<key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3eb53b04991\" > >>" with result "size:18" took too long (203.136444ms) to execute
	* 2020-11-09 21:26:01.566050 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/\" range_end:\"/registry/operators.coreos.com/operatorgroups0\" " with result "range_response_count:2 size:2446" took too long (392.352389ms) to execute
	* 2020-11-09 21:26:01.785263 W | etcdserver: request "header:<ID:12712383767199948172 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3eb656994a8\" mod_revision:813 > success:<request_delete_range:<key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3eb656994a8\" > > failure:<request_range:<key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3eb656994a8\" > >>" with result "size:18" took too long (127.326282ms) to execute
	* 2020-11-09 21:26:01.971283 W | etcdserver: read-only range request "key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3ef5c8b0a3d\" " with result "range_response_count:1 size:829" took too long (112.162564ms) to execute
	* 2020-11-09 21:26:01.973099 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/clusterserviceversions/olm/packageserver\" " with result "range_response_count:1 size:9920" took too long (112.628501ms) to execute
	* 2020-11-09 21:26:02.063031 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (102.144654ms) to execute
	* 2020-11-09 21:26:02.175066 W | etcdserver: read-only range request "key:\"/registry/csinodes\" range_end:\"/registry/csinodet\" count_only:true " with result "range_response_count:0 size:7" took too long (114.590342ms) to execute
	* 2020-11-09 21:26:02.181783 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/olm/\" range_end:\"/registry/operators.coreos.com/operatorgroups/olm0\" " with result "range_response_count:1 size:1332" took too long (109.082645ms) to execute
	* 2020-11-09 21:26:02.266741 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:610" took too long (191.285101ms) to execute
	* 2020-11-09 21:26:02.270940 W | etcdserver: read-only range request "key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3f8c03a8256\" " with result "range_response_count:1 size:857" took too long (198.96592ms) to execute
	* 2020-11-09 21:26:02.374817 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/\" range_end:\"/registry/operators.coreos.com/operatorgroups0\" " with result "range_response_count:2 size:2446" took too long (108.57116ms) to execute
	* 2020-11-09 21:26:02.578528 W | etcdserver: request "header:<ID:12712383767199948192 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3f8c551cd70\" mod_revision:1110 > success:<request_delete_range:<key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3f8c551cd70\" > > failure:<request_range:<key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3f8c551cd70\" > >>" with result "size:18" took too long (192.131347ms) to execute
	* 2020-11-09 21:26:02.676434 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:2 size:5355" took too long (110.36684ms) to execute
	* 2020-11-09 21:26:03.086578 W | etcdserver: request "header:<ID:12712383767199948217 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-certs-create-6ffdw.1645f3ee6b1bf292\" mod_revision:852 > success:<request_delete_range:<key:\"/registry/events/gcp-auth/gcp-auth-certs-create-6ffdw.1645f3ee6b1bf292\" > > failure:<request_range:<key:\"/registry/events/gcp-auth/gcp-auth-certs-create-6ffdw.1645f3ee6b1bf292\" > >>" with result "size:18" took too long (109.298167ms) to execute
	* 2020-11-09 21:26:03.116560 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:20 size:88558" took too long (550.815058ms) to execute
	* 2020-11-09 21:26:04.808865 W | etcdserver: read-only range request "key:\"/registry/pods/\" range_end:\"/registry/pods0\" " with result "range_response_count:29 size:132038" took too long (127.669266ms) to execute
	* 2020-11-09 21:26:07.395129 W | etcdserver: read-only range request "key:\"/registry/events/\" range_end:\"/registry/events0\" " with result "range_response_count:221 size:180081" took too long (2.525012011s) to execute
	* 2020-11-09 21:26:09.860457 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/\" range_end:\"/registry/operators.coreos.com/operatorgroups0\" " with result "range_response_count:2 size:2446" took too long (180.444172ms) to execute
	* 2020-11-09 21:26:10.383059 W | etcdserver: read-only range request "key:\"/registry/operators.coreos.com/operatorgroups/\" range_end:\"/registry/operators.coreos.com/operatorgroups0\" " with result "range_response_count:2 size:2446" took too long (101.331378ms) to execute
	* 2020-11-09 21:26:10.859475 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> kernel <==
	*  21:26:11 up  1:08,  0 users,  load average: 2.69, 4.18, 6.16
	* Linux addons-20201109132301-342799 4.9.0-14-amd64 #1 SMP Debian 4.9.240-2 (2020-10-30) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [6a5979ff16b3] <==
	* I1109 21:25:54.478866       1 trace.go:205] Trace[2027621802]: "List etcd3" key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: (09-Nov-2020 21:25:53.667) (total time: 810ms):
	* Trace[2027621802]: [810.90739ms] [810.90739ms] END
	* I1109 21:25:54.482503       1 trace.go:205] Trace[166959150]: "List" url:/api/v1/namespaces/kube-system/pods,user-agent:kubectl/v1.19.3 (linux/amd64) kubernetes/1e11e4a,client:192.168.49.1 (09-Nov-2020 21:25:53.667) (total time: 814ms):
	* Trace[166959150]: ---"Listing from storage done" 810ms (21:25:00.478)
	* Trace[166959150]: [814.558544ms] [814.558544ms] END
	* I1109 21:25:59.073542       1 trace.go:205] Trace[89682460]: "Delete" url:/api/v1/namespaces/gcp-auth/secrets/default-token-n7txb,user-agent:kube-controller-manager/v1.19.2 (linux/amd64) kubernetes/f574309/tokens-controller,client:192.168.49.16 (09-Nov-2020 21:25:58.268) (total time: 805ms):
	* Trace[89682460]: ---"Object deleted from database" 805ms (21:25:00.073)
	* Trace[89682460]: [805.326949ms] [805.326949ms] END
	* I1109 21:25:59.881290       1 client.go:360] parsed scheme: "passthrough"
	* I1109 21:25:59.881393       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I1109 21:25:59.881407       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I1109 21:26:00.673717       1 trace.go:205] Trace[1335460069]: "List etcd3" key:/events/gcp-auth,resourceVersion:,resourceVersionMatch:,limit:0,continue: (09-Nov-2020 21:25:58.363) (total time: 2309ms):
	* Trace[1335460069]: [2.309800445s] [2.309800445s] END
	* I1109 21:26:03.160034       1 trace.go:205] Trace[819747574]: "List etcd3" key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: (09-Nov-2020 21:26:02.564) (total time: 595ms):
	* Trace[819747574]: [595.770632ms] [595.770632ms] END
	* I1109 21:26:03.165560       1 trace.go:205] Trace[2145046238]: "List" url:/api/v1/namespaces/kube-system/pods,user-agent:kubectl/v1.19.3 (linux/amd64) kubernetes/1e11e4a,client:192.168.49.1 (09-Nov-2020 21:26:02.564) (total time: 599ms):
	* Trace[2145046238]: ---"Listing from storage done" 595ms (21:26:00.160)
	* Trace[2145046238]: [599.332158ms] [599.332158ms] END
	* I1109 21:26:03.476565       1 trace.go:205] Trace[157952960]: "Delete" url:/apis/events.k8s.io/v1/namespaces/gcp-auth/events (09-Nov-2020 21:25:58.363) (total time: 5112ms):
	* Trace[157952960]: [5.112782061s] [5.112782061s] END
	* I1109 21:26:07.400313       1 trace.go:205] Trace[1448069030]: "List etcd3" key:/events,resourceVersion:,resourceVersionMatch:,limit:0,continue: (09-Nov-2020 21:26:04.869) (total time: 2530ms):
	* Trace[1448069030]: [2.530825895s] [2.530825895s] END
	* I1109 21:26:07.401038       1 trace.go:205] Trace[395661830]: "List" url:/api/v1/events,user-agent:kubectl/v1.19.2 (linux/amd64) kubernetes/f574309,client:127.0.0.1 (09-Nov-2020 21:26:04.869) (total time: 2531ms):
	* Trace[395661830]: ---"Listing from storage done" 2530ms (21:26:00.400)
	* Trace[395661830]: [2.53159701s] [2.53159701s] END
	* 
	* ==> kube-controller-manager [ddd078525f2e] <==
	* E1109 21:24:15.598447       1 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	* I1109 21:24:15.759379       1 event.go:291] "Event occurred" object="olm/packageserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set packageserver-6bcb8dd987 to 2"
	* W1109 21:24:16.174829       1 garbagecollector.go:642] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	* E1109 21:24:16.258889       1 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	* I1109 21:24:16.789074       1 event.go:291] "Event occurred" object="olm/packageserver-6bcb8dd987" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: packageserver-6bcb8dd987-lxv8s"
	* I1109 21:24:17.293896       1 event.go:291] "Event occurred" object="olm/packageserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set packageserver-66bb6d45f7 to 1"
	* E1109 21:24:17.559449       1 memcache.go:196] couldn't get resource list for packages.operators.coreos.com/v1: the server could not find the requested resource
	* E1109 21:24:17.759083       1 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	* I1109 21:24:17.760329       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I1109 21:24:17.760393       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:24:17.813212       1 event.go:291] "Event occurred" object="olm/packageserver-6bcb8dd987" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: packageserver-6bcb8dd987-6ppbc"
	* I1109 21:24:18.362300       1 event.go:291] "Event occurred" object="olm/packageserver-66bb6d45f7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: packageserver-66bb6d45f7-8ct82"
	* I1109 21:24:26.805943       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	* E1109 21:24:41.566834       1 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request
	* I1109 21:24:49.510631       1 request.go:645] Throttling request took 1.04838045s, request: GET:https://192.168.49.16:8443/apis/operators.coreos.com/v1?timeout=32s
	* W1109 21:24:50.462247       1 garbagecollector.go:642] failed to discover some groups: map[packages.operators.coreos.com/v1:the server is currently unable to handle the request]
	* E1109 21:25:12.168783       1 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request
	* I1109 21:25:22.257737       1 request.go:645] Throttling request took 1.091911962s, request: GET:https://192.168.49.16:8443/apis/networking.k8s.io/v1?timeout=32s
	* W1109 21:25:23.177094       1 garbagecollector.go:642] failed to discover some groups: map[packages.operators.coreos.com/v1:the server is currently unable to handle the request]
	* I1109 21:25:39.187410       1 event.go:291] "Event occurred" object="default/hpvc" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	* I1109 21:25:39.187451       1 event.go:291] "Event occurred" object="default/hpvc" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	* I1109 21:25:39.588185       1 reconciler.go:275] attacherDetacher.AttachVolume started for volume "pvc-0cf96e21-5e74-4972-a50d-167015748f6f" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^1d208636-22d2-11eb-8601-0242ac11000d") from node "addons-20201109132301-342799" 
	* I1109 21:25:39.627725       1 operation_generator.go:361] AttachVolume.Attach succeeded for volume "pvc-0cf96e21-5e74-4972-a50d-167015748f6f" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^1d208636-22d2-11eb-8601-0242ac11000d") from node "addons-20201109132301-342799" 
	* I1109 21:25:39.627881       1 event.go:291] "Event occurred" object="default/task-pv-pod" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-0cf96e21-5e74-4972-a50d-167015748f6f\" "
	* E1109 21:25:42.771490       1 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request
	* 
	* ==> kube-proxy [f6b75defdecc] <==
	* I1109 21:23:43.568795       1 node.go:136] Successfully retrieved node IP: 192.168.49.16
	* I1109 21:23:43.568966       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.49.16), assume IPv4 operation
	* W1109 21:23:43.781526       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
	* I1109 21:23:43.781623       1 server_others.go:186] Using iptables Proxier.
	* W1109 21:23:43.781642       1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
	* I1109 21:23:43.781647       1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
	* I1109 21:23:43.782066       1 server.go:650] Version: v1.19.2
	* I1109 21:23:43.785657       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:23:43.785796       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1109 21:23:43.785876       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1109 21:23:43.786077       1 config.go:315] Starting service config controller
	* I1109 21:23:43.786100       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I1109 21:23:43.786129       1 config.go:224] Starting endpoint slice config controller
	* I1109 21:23:43.786138       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I1109 21:23:43.886340       1 shared_informer.go:247] Caches are synced for service config 
	* I1109 21:23:43.886398       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* 
	* ==> kube-scheduler [9e0d25030346] <==
	* I1109 21:23:30.180222       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:23:30.180259       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:23:30.184042       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	* I1109 21:23:30.258507       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:23:30.258532       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:23:30.258980       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* E1109 21:23:30.261857       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1109 21:23:30.262018       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1109 21:23:30.262454       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:23:30.262684       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:23:30.262819       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:23:30.263078       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:23:30.264112       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:23:30.264182       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1109 21:23:30.264348       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:23:30.264538       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:23:30.264593       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:23:30.264641       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:23:30.266387       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:23:31.142573       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:23:31.233302       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:23:31.316895       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:23:31.358734       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:23:31.478391       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* I1109 21:23:34.758723       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2020-11-09 21:23:04 UTC, end at Mon 2020-11-09 21:26:12 UTC. --
	* Nov 09 21:26:04 addons-20201109132301-342799 kubelet[2533]: W1109 21:26:04.465263    2533 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for olm/packageserver-66bb6d45f7-8ct82 through plugin: invalid network status for
	* Nov 09 21:26:04 addons-20201109132301-342799 kubelet[2533]: I1109 21:26:04.567531    2533 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 517c8ad3693001366728f69278940bfa3275acc5af0d1079b455487ee9dd82ed
	* Nov 09 21:26:04 addons-20201109132301-342799 kubelet[2533]: E1109 21:26:04.574390    2533 remote_runtime.go:329] ContainerStatus "517c8ad3693001366728f69278940bfa3275acc5af0d1079b455487ee9dd82ed" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 517c8ad3693001366728f69278940bfa3275acc5af0d1079b455487ee9dd82ed
	* Nov 09 21:26:04 addons-20201109132301-342799 kubelet[2533]: W1109 21:26:04.574443    2533 pod_container_deletor.go:52] [pod_container_deletor] DeleteContainer returned error for (id={docker 517c8ad3693001366728f69278940bfa3275acc5af0d1079b455487ee9dd82ed}): failed to get container status "517c8ad3693001366728f69278940bfa3275acc5af0d1079b455487ee9dd82ed": rpc error: code = Unknown desc = Error: No such container: 517c8ad3693001366728f69278940bfa3275acc5af0d1079b455487ee9dd82ed
	* Nov 09 21:26:04 addons-20201109132301-342799 kubelet[2533]: I1109 21:26:04.627007    2533 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5dff05bbc9e9433decf66c788969fa7dd2bc74c15d0a7742adce1fb06bcbd43d
	* Nov 09 21:26:04 addons-20201109132301-342799 kubelet[2533]: W1109 21:26:04.668840    2533 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/task-pv-pod through plugin: invalid network status for
	* Nov 09 21:26:04 addons-20201109132301-342799 kubelet[2533]: I1109 21:26:04.675383    2533 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-74rlq" (UniqueName: "kubernetes.io/secret/8ab364e7-de9a-4498-a665-85e8ad7001b5-default-token-74rlq") pod "8ab364e7-de9a-4498-a665-85e8ad7001b5" (UID: "8ab364e7-de9a-4498-a665-85e8ad7001b5")
	* Nov 09 21:26:04 addons-20201109132301-342799 kubelet[2533]: I1109 21:26:04.757780    2533 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab364e7-de9a-4498-a665-85e8ad7001b5-default-token-74rlq" (OuterVolumeSpecName: "default-token-74rlq") pod "8ab364e7-de9a-4498-a665-85e8ad7001b5" (UID: "8ab364e7-de9a-4498-a665-85e8ad7001b5"). InnerVolumeSpecName "default-token-74rlq". PluginName "kubernetes.io/secret", VolumeGidValue ""
	* Nov 09 21:26:04 addons-20201109132301-342799 kubelet[2533]: I1109 21:26:04.759739    2533 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5dff05bbc9e9433decf66c788969fa7dd2bc74c15d0a7742adce1fb06bcbd43d
	* Nov 09 21:26:04 addons-20201109132301-342799 kubelet[2533]: E1109 21:26:04.760959    2533 remote_runtime.go:329] ContainerStatus "5dff05bbc9e9433decf66c788969fa7dd2bc74c15d0a7742adce1fb06bcbd43d" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 5dff05bbc9e9433decf66c788969fa7dd2bc74c15d0a7742adce1fb06bcbd43d
	* Nov 09 21:26:04 addons-20201109132301-342799 kubelet[2533]: W1109 21:26:04.761008    2533 pod_container_deletor.go:52] [pod_container_deletor] DeleteContainer returned error for (id={docker 5dff05bbc9e9433decf66c788969fa7dd2bc74c15d0a7742adce1fb06bcbd43d}): failed to get container status "5dff05bbc9e9433decf66c788969fa7dd2bc74c15d0a7742adce1fb06bcbd43d": rpc error: code = Unknown desc = Error: No such container: 5dff05bbc9e9433decf66c788969fa7dd2bc74c15d0a7742adce1fb06bcbd43d
	* Nov 09 21:26:04 addons-20201109132301-342799 kubelet[2533]: I1109 21:26:04.775902    2533 reconciler.go:319] Volume detached for volume "default-token-74rlq" (UniqueName: "kubernetes.io/secret/8ab364e7-de9a-4498-a665-85e8ad7001b5-default-token-74rlq") on node "addons-20201109132301-342799" DevicePath ""
	* Nov 09 21:26:06 addons-20201109132301-342799 kubelet[2533]: I1109 21:26:06.683915    2533 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-74rlq" (UniqueName: "kubernetes.io/secret/8892c84c-3938-4ded-a15a-9333012648e2-default-token-74rlq") pod "8892c84c-3938-4ded-a15a-9333012648e2" (UID: "8892c84c-3938-4ded-a15a-9333012648e2")
	* Nov 09 21:26:06 addons-20201109132301-342799 kubelet[2533]: I1109 21:26:06.684160    2533 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-n7txb" (UniqueName: "kubernetes.io/secret/3b43bf3b-441d-4814-99a9-da234703dfae-default-token-n7txb") pod "3b43bf3b-441d-4814-99a9-da234703dfae" (UID: "3b43bf3b-441d-4814-99a9-da234703dfae")
	* Nov 09 21:26:06 addons-20201109132301-342799 kubelet[2533]: I1109 21:26:06.684203    2533 reconciler.go:196] operationExecutor.UnmountVolume started for volume "gcp-project" (UniqueName: "kubernetes.io/host-path/3b43bf3b-441d-4814-99a9-da234703dfae-gcp-project") pod "3b43bf3b-441d-4814-99a9-da234703dfae" (UID: "3b43bf3b-441d-4814-99a9-da234703dfae")
	* Nov 09 21:26:06 addons-20201109132301-342799 kubelet[2533]: I1109 21:26:06.684303    2533 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3b43bf3b-441d-4814-99a9-da234703dfae-webhook-certs") pod "3b43bf3b-441d-4814-99a9-da234703dfae" (UID: "3b43bf3b-441d-4814-99a9-da234703dfae")
	* Nov 09 21:26:06 addons-20201109132301-342799 kubelet[2533]: I1109 21:26:06.684408    2533 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b43bf3b-441d-4814-99a9-da234703dfae-gcp-project" (OuterVolumeSpecName: "gcp-project") pod "3b43bf3b-441d-4814-99a9-da234703dfae" (UID: "3b43bf3b-441d-4814-99a9-da234703dfae"). InnerVolumeSpecName "gcp-project". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	* Nov 09 21:26:06 addons-20201109132301-342799 kubelet[2533]: I1109 21:26:06.760591    2533 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b43bf3b-441d-4814-99a9-da234703dfae-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "3b43bf3b-441d-4814-99a9-da234703dfae" (UID: "3b43bf3b-441d-4814-99a9-da234703dfae"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
	* Nov 09 21:26:06 addons-20201109132301-342799 kubelet[2533]: I1109 21:26:06.760746    2533 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b43bf3b-441d-4814-99a9-da234703dfae-default-token-n7txb" (OuterVolumeSpecName: "default-token-n7txb") pod "3b43bf3b-441d-4814-99a9-da234703dfae" (UID: "3b43bf3b-441d-4814-99a9-da234703dfae"). InnerVolumeSpecName "default-token-n7txb". PluginName "kubernetes.io/secret", VolumeGidValue ""
	* Nov 09 21:26:06 addons-20201109132301-342799 kubelet[2533]: I1109 21:26:06.784822    2533 reconciler.go:319] Volume detached for volume "default-token-n7txb" (UniqueName: "kubernetes.io/secret/3b43bf3b-441d-4814-99a9-da234703dfae-default-token-n7txb") on node "addons-20201109132301-342799" DevicePath ""
	* Nov 09 21:26:06 addons-20201109132301-342799 kubelet[2533]: I1109 21:26:06.784872    2533 reconciler.go:319] Volume detached for volume "gcp-project" (UniqueName: "kubernetes.io/host-path/3b43bf3b-441d-4814-99a9-da234703dfae-gcp-project") on node "addons-20201109132301-342799" DevicePath ""
	* Nov 09 21:26:06 addons-20201109132301-342799 kubelet[2533]: I1109 21:26:06.784896    2533 reconciler.go:319] Volume detached for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3b43bf3b-441d-4814-99a9-da234703dfae-webhook-certs") on node "addons-20201109132301-342799" DevicePath ""
	* Nov 09 21:26:06 addons-20201109132301-342799 kubelet[2533]: I1109 21:26:06.859398    2533 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8892c84c-3938-4ded-a15a-9333012648e2-default-token-74rlq" (OuterVolumeSpecName: "default-token-74rlq") pod "8892c84c-3938-4ded-a15a-9333012648e2" (UID: "8892c84c-3938-4ded-a15a-9333012648e2"). InnerVolumeSpecName "default-token-74rlq". PluginName "kubernetes.io/secret", VolumeGidValue ""
	* Nov 09 21:26:06 addons-20201109132301-342799 kubelet[2533]: I1109 21:26:06.885279    2533 reconciler.go:319] Volume detached for volume "default-token-74rlq" (UniqueName: "kubernetes.io/secret/8892c84c-3938-4ded-a15a-9333012648e2-default-token-74rlq") on node "addons-20201109132301-342799" DevicePath ""
	* Nov 09 21:26:07 addons-20201109132301-342799 kubelet[2533]: W1109 21:26:07.992884    2533 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for olm/packageserver-6bcb8dd987-lxv8s through plugin: invalid network status for
	* 
	* ==> storage-provisioner [d0ee375969ff] <==
	* I1109 21:24:04.372013       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I1109 21:24:04.460772       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I1109 21:24:04.461233       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_addons-20201109132301-342799_388244a2-344c-4577-a94d-661729601e1e!
	* I1109 21:24:04.461680       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b8dd61cc-7749-4a36-b1c2-b9b3b8a8ec0e", APIVersion:"v1", ResourceVersion:"828", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20201109132301-342799_388244a2-344c-4577-a94d-661729601e1e became leader
	* I1109 21:24:04.561803       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_addons-20201109132301-342799_388244a2-344c-4577-a94d-661729601e1e!

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 13:26:11.014857  369719 out.go:286] unable to execute * 2020-11-09 21:26:01.564105 W | etcdserver: request "header:<ID:12712383767199948166 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3eb53b04991\" mod_revision:729 > success:<request_delete_range:<key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3eb53b04991\" > > failure:<request_range:<key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3eb53b04991\" > >>" with result "size:18" took too long (203.136444ms) to execute
	: html/template:* 2020-11-09 21:26:01.564105 W | etcdserver: request "header:<ID:12712383767199948166 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3eb53b04991\" mod_revision:729 > success:<request_delete_range:<key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3eb53b04991\" > > failure:<request_range:<key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3eb53b04991\" > >>" with result "size:18" took too long (203.136444ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E1109 13:26:11.026034  369719 out.go:286] unable to execute * 2020-11-09 21:26:01.785263 W | etcdserver: request "header:<ID:12712383767199948172 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3eb656994a8\" mod_revision:813 > success:<request_delete_range:<key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3eb656994a8\" > > failure:<request_range:<key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3eb656994a8\" > >>" with result "size:18" took too long (127.326282ms) to execute
	: html/template:* 2020-11-09 21:26:01.785263 W | etcdserver: request "header:<ID:12712383767199948172 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3eb656994a8\" mod_revision:813 > success:<request_delete_range:<key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3eb656994a8\" > > failure:<request_range:<key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3eb656994a8\" > >>" with result "size:18" took too long (127.326282ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E1109 13:26:11.055352  369719 out.go:286] unable to execute * 2020-11-09 21:26:02.578528 W | etcdserver: request "header:<ID:12712383767199948192 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3f8c551cd70\" mod_revision:1110 > success:<request_delete_range:<key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3f8c551cd70\" > > failure:<request_range:<key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3f8c551cd70\" > >>" with result "size:18" took too long (192.131347ms) to execute
	: html/template:* 2020-11-09 21:26:02.578528 W | etcdserver: request "header:<ID:12712383767199948192 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3f8c551cd70\" mod_revision:1110 > success:<request_delete_range:<key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3f8c551cd70\" > > failure:<request_range:<key:\"/registry/events/gcp-auth/gcp-auth-74f9689fd7-h8p7p.1645f3f8c551cd70\" > >>" with result "size:18" took too long (192.131347ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E1109 13:26:11.066647  369719 out.go:286] unable to execute * 2020-11-09 21:26:03.086578 W | etcdserver: request "header:<ID:12712383767199948217 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-certs-create-6ffdw.1645f3ee6b1bf292\" mod_revision:852 > success:<request_delete_range:<key:\"/registry/events/gcp-auth/gcp-auth-certs-create-6ffdw.1645f3ee6b1bf292\" > > failure:<request_range:<key:\"/registry/events/gcp-auth/gcp-auth-certs-create-6ffdw.1645f3ee6b1bf292\" > >>" with result "size:18" took too long (109.298167ms) to execute
	: html/template:* 2020-11-09 21:26:03.086578 W | etcdserver: request "header:<ID:12712383767199948217 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-certs-create-6ffdw.1645f3ee6b1bf292\" mod_revision:852 > success:<request_delete_range:<key:\"/registry/events/gcp-auth/gcp-auth-certs-create-6ffdw.1645f3ee6b1bf292\" > > failure:<request_range:<key:\"/registry/events/gcp-auth/gcp-auth-certs-create-6ffdw.1645f3ee6b1bf292\" > >>" with result "size:18" took too long (109.298167ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.

                                                
                                                
** /stderr **
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20201109132301-342799 -n addons-20201109132301-342799

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:255: (dbg) Run:  kubectl --context addons-20201109132301-342799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: non-running pods: task-pv-pod gcp-auth-74f9689fd7-h8p7p ingress-nginx-admission-create-czqhj ingress-nginx-admission-patch-q9t46
helpers_test.go:263: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:266: (dbg) Run:  kubectl --context addons-20201109132301-342799 describe pod task-pv-pod gcp-auth-74f9689fd7-h8p7p ingress-nginx-admission-create-czqhj ingress-nginx-admission-patch-q9t46
helpers_test.go:266: (dbg) Non-zero exit: kubectl --context addons-20201109132301-342799 describe pod task-pv-pod gcp-auth-74f9689fd7-h8p7p ingress-nginx-admission-create-czqhj ingress-nginx-admission-patch-q9t46: exit status 1 (415.438802ms)

                                                
                                                
-- stdout --
	Name:         task-pv-pod
	Namespace:    default
	Priority:     0
	Node:         addons-20201109132301-342799/192.168.49.16
	Start Time:   Mon, 09 Nov 2020 13:25:39 -0800
	Labels:       app=task-pv-pod
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-q7wzn (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  default-token-q7wzn:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-q7wzn
	    Optional:    false
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason                  Age   From                     Message
	  ----    ------                  ----  ----                     -------
	  Normal  Scheduled               35s   default-scheduler        Successfully assigned default/task-pv-pod to addons-20201109132301-342799
	  Normal  SuccessfulAttachVolume  35s   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-0cf96e21-5e74-4972-a50d-167015748f6f"
	  Normal  Pulling                 11s   kubelet                  Pulling image "nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-74f9689fd7-h8p7p" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-czqhj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-q9t46" not found

                                                
                                                
** /stderr **
helpers_test.go:268: kubectl --context addons-20201109132301-342799 describe pod task-pv-pod gcp-auth-74f9689fd7-h8p7p ingress-nginx-admission-create-czqhj ingress-nginx-admission-patch-q9t46: exit status 1
--- FAIL: TestAddons/parallel/Registry (36.13s)

                                                
                                    
TestSkaffold (60.52s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:53: (dbg) Run:  /tmp/skaffold.exe189393527 version
skaffold_test.go:57: skaffold version: v1.16.0
skaffold_test.go:60: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-20201109133744-342799 --memory=2600 --driver=docker 
skaffold_test.go:60: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-20201109133744-342799 --memory=2600 --driver=docker : (36.627884526s)
skaffold_test.go:73: copying out/minikube-linux-amd64 to /home/jenkins/workspace/docker_Linux_integration/out/minikube
skaffold_test.go:97: (dbg) Run:  /tmp/skaffold.exe189393527 run --minikube-profile skaffold-20201109133744-342799 --kube-context skaffold-20201109133744-342799 --status-check=true --port-forward=false
skaffold_test.go:97: (dbg) Non-zero exit: /tmp/skaffold.exe189393527 run --minikube-profile skaffold-20201109133744-342799 --kube-context skaffold-20201109133744-342799 --status-check=true --port-forward=false: exit status 1 (2.823409105s)

                                                
                                                
-- stdout --
	Generating tags...
	 - leeroy-web -> leeroy-web:latest
	 - leeroy-app -> leeroy-app:latest
	Some taggers failed. Rerun with -vdebug for errors.
	Checking cache...
	 - leeroy-web: Error checking cache.

                                                
                                                
-- /stdout --
** stderr ** 
	failed to build: getting imageID for leeroy-web:latest: The server probably has client authentication (--tlsverify) enabled. Please check your TLS client certification settings: Get "https://192.168.59.16:2376/v1.24/images/leeroy-web:latest/json": remote error: tls: bad certificate

                                                
                                                
** /stderr **
skaffold_test.go:99: error running skaffold: exit status 1

                                                
                                                
-- stdout --
	Generating tags...
	 - leeroy-web -> leeroy-web:latest
	 - leeroy-app -> leeroy-app:latest
	Some taggers failed. Rerun with -vdebug for errors.
	Checking cache...
	 - leeroy-web: Error checking cache.

                                                
                                                
-- /stdout --
** stderr ** 
	failed to build: getting imageID for leeroy-web:latest: The server probably has client authentication (--tlsverify) enabled. Please check your TLS client certification settings: Get "https://192.168.59.16:2376/v1.24/images/leeroy-web:latest/json": remote error: tls: bad certificate

                                                
                                                
** /stderr **
panic.go:617: *** TestSkaffold FAILED at 2020-11-09 13:38:24.824607477 -0800 PST m=+1120.100220318
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:225: (dbg) Run:  docker inspect skaffold-20201109133744-342799
helpers_test.go:229: (dbg) docker inspect skaffold-20201109133744-342799:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cb049ddc480637916c6b0e1535b8205e2aea5f0467b9e87f10f08addb4c24e2a",
	        "Created": "2020-11-09T21:37:47.373456611Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 429504,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-11-09T21:37:47.960144915Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e0876e0a2db41a04c8143cbd27a7a5f9e10b610cb093093def031c59e5b44b0c",
	        "ResolvConfPath": "/var/lib/docker/containers/cb049ddc480637916c6b0e1535b8205e2aea5f0467b9e87f10f08addb4c24e2a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cb049ddc480637916c6b0e1535b8205e2aea5f0467b9e87f10f08addb4c24e2a/hostname",
	        "HostsPath": "/var/lib/docker/containers/cb049ddc480637916c6b0e1535b8205e2aea5f0467b9e87f10f08addb4c24e2a/hosts",
	        "LogPath": "/var/lib/docker/containers/cb049ddc480637916c6b0e1535b8205e2aea5f0467b9e87f10f08addb4c24e2a/cb049ddc480637916c6b0e1535b8205e2aea5f0467b9e87f10f08addb4c24e2a-json.log",
	        "Name": "/skaffold-20201109133744-342799",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "skaffold-20201109133744-342799:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "skaffold-20201109133744-342799",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2726297600,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/87dff99fe6ebbae50397ffe37e36bb311180889989b54962969abd86c0c0ba4d-init/diff:/var/lib/docker/overlay2/95e86d765bff52afd170fbce58d14aba80acb592622f18ddf93f06d4a9f68b7b/diff:/var/lib/docker/overlay2/f3b9d7ca87d0bc2c33a2b73d4acfe36b9573ed84758410f9fb2646a520f241fe/diff:/var/lib/docker/overlay2/207fc0f6ee67dd96ccb25c63e93e7189f5f397b509f8c71ead5583a80c5f94b5/diff:/var/lib/docker/overlay2/8caa5ec9f05b14f99c7708700c1734fe9d519d629a5228c0c3cb1231f1cef09e/diff:/var/lib/docker/overlay2/f2f5d9470c647e56b32cee1438db6b6249291738854b6e7a0a4a405034649aa6/diff:/var/lib/docker/overlay2/99b3672a581ed70eb5d25da5aed44455272a7adbd48148c91dc38abfbaa6e93d/diff:/var/lib/docker/overlay2/4bc3f7ac122327f368cdfb9bf82d578fce25cf7f63160aa08d869610da8889f4/diff:/var/lib/docker/overlay2/0f945a2ada7520d0f95289ddf08a8e7aedfd5dbfad8087529175cd81d61b52bc/diff:/var/lib/docker/overlay2/a14122d44a5bdb05416a0e7fbe1684738e86454c2eeb7671ecc856e6fdaa0273/diff:/var/lib/docker/overlay2/e669c0
1451df27b1aeed4835d4d9d40fae282947f2b260d9bc1d86355f464b80/diff:/var/lib/docker/overlay2/f0227d1f2e3671850f58a9e89d347d8c2a26c40d79b078f02122819452c582d8/diff:/var/lib/docker/overlay2/35b3431b00b5c7bd66b1d24487a8a45645708ce7a87848986068c0df1515b812/diff:/var/lib/docker/overlay2/c1af3457c560ed19e519e64ff919ac7035d8d9801f4f44adf7b160339827ebd0/diff:/var/lib/docker/overlay2/3861f37a52f49a93d483d69cc77384a5792b260325af8224c6225039f7480889/diff:/var/lib/docker/overlay2/05239e7556fc89f82e761ff1ddd2d1ed0ee1b426a039ad3c9a3ba0519d737f51/diff:/var/lib/docker/overlay2/cb1f04e7554841e6f2102080f7854a3c7e464e45648c1419ef2f8b4d8a03f0bb/diff:/var/lib/docker/overlay2/80908997b1122a686bee670e8dd93659dece6b51614d8576f02c11b96178534d/diff:/var/lib/docker/overlay2/6c4d0be909f5f0a95818a6009b44c355b57e39b0709de1cce2ba45597354d590/diff:/var/lib/docker/overlay2/3a855d5f2b162c2d938a5745ba8131c9a2b244967b4a03f6cddd177236f3a934/diff:/var/lib/docker/overlay2/0ab0a2a3e614830a41a389aa6ff55503f9c10b1227acfb927550f9726eb9a605/diff:/var/lib/d
ocker/overlay2/3331dc047d4c1c4d3b6bf48aba65be1ad34cff6d25329ef56e91ad8e4ee65fde/diff:/var/lib/docker/overlay2/d286c2ac5bb00869e09924bb7fc7389073b7267af1cc4c3ae6a339b29e4c0cc1/diff:/var/lib/docker/overlay2/d0a518fbd2bc93756d11040cfe4d7757f96e813dd9ddf918a54a82f7e61cb791/diff:/var/lib/docker/overlay2/2d63359a4d0da5faa54d017f487391caaee972fe4fb98db4b82e7aee2d9b01ee/diff:/var/lib/docker/overlay2/40ca142bba947004e2782bce25bd04810589fffbf8086ba91e3a074517cfa13c/diff:/var/lib/docker/overlay2/5b738c1270b81825ef44e9bd466dd8533bc4e3e3796355d7dc5dc104fb3a18c8/diff:/var/lib/docker/overlay2/c57d8a6f6b648bd2cf9d7140762b8590e1a71b9040c8a0f2b8ab31f65011bcd2/diff:/var/lib/docker/overlay2/37f83aba64f92a6cf695468f5b0a1016634c95e09933dcecec297cca22b9038b/diff:/var/lib/docker/overlay2/396d3843ed4e04893bd4bf961eab2522ce90962e626e057f5be38cbe0ed65d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/87dff99fe6ebbae50397ffe37e36bb311180889989b54962969abd86c0c0ba4d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/87dff99fe6ebbae50397ffe37e36bb311180889989b54962969abd86c0c0ba4d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/87dff99fe6ebbae50397ffe37e36bb311180889989b54962969abd86c0c0ba4d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "skaffold-20201109133744-342799",
	                "Source": "/var/lib/docker/volumes/skaffold-20201109133744-342799/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "skaffold-20201109133744-342799",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "skaffold-20201109133744-342799",
	                "name.minikube.sigs.k8s.io": "skaffold-20201109133744-342799",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f555a24bf7edee77e166ee3985cea2ca04a4ca4235115258e4a2d0c90a5c6a6f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33015"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33014"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33013"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33012"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f555a24bf7ed",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "skaffold-20201109133744-342799": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.59.16"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "cb049ddc4806"
	                    ],
	                    "NetworkID": "513154d38da57fbead6e8bfd59661d71dc192f99c650a6dd9aa4685b1721348e",
	                    "EndpointID": "61221175fed1e9703ff8235c27ddb8cb56bebd5399a1448319f88a2ba5da268f",
	                    "Gateway": "192.168.59.1",
	                    "IPAddress": "192.168.59.16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3b:10",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p skaffold-20201109133744-342799 -n skaffold-20201109133744-342799
helpers_test.go:233: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p skaffold-20201109133744-342799 -n skaffold-20201109133744-342799: exit status 2 (5.923691375s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 13:38:30.805193  433805 status.go:372] Error apiserver status: https://192.168.59.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	

                                                
                                                
** /stderr **
helpers_test.go:233: status error: exit status 2 (may be ok)
helpers_test.go:238: <<< TestSkaffold FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestSkaffold]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p skaffold-20201109133744-342799 logs -n 25
helpers_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 -p skaffold-20201109133744-342799 logs -n 25: exit status 110 (12.009881532s)

                                                
                                                
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Mon 2020-11-09 21:37:48 UTC, end at Mon 2020-11-09 21:38:34 UTC. --
	* Nov 09 21:38:23 skaffold-20201109133744-342799 dockerd[3182]: time="2020-11-09T21:38:23.482075700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Nov 09 21:38:23 skaffold-20201109133744-342799 dockerd[3182]: time="2020-11-09T21:38:23.483493647Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Nov 09 21:38:23 skaffold-20201109133744-342799 dockerd[3182]: time="2020-11-09T21:38:23.483547514Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Nov 09 21:38:23 skaffold-20201109133744-342799 dockerd[3182]: time="2020-11-09T21:38:23.483574732Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	* Nov 09 21:38:23 skaffold-20201109133744-342799 dockerd[3182]: time="2020-11-09T21:38:23.483590870Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Nov 09 21:38:23 skaffold-20201109133744-342799 dockerd[3182]: time="2020-11-09T21:38:23.510486018Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Nov 09 21:38:23 skaffold-20201109133744-342799 dockerd[3182]: time="2020-11-09T21:38:23.520129477Z" level=warning msg="Your kernel does not support swap memory limit"
	* Nov 09 21:38:23 skaffold-20201109133744-342799 dockerd[3182]: time="2020-11-09T21:38:23.520167770Z" level=warning msg="Your kernel does not support cgroup rt period"
	* Nov 09 21:38:23 skaffold-20201109133744-342799 dockerd[3182]: time="2020-11-09T21:38:23.520180572Z" level=warning msg="Your kernel does not support cgroup rt runtime"
	* Nov 09 21:38:23 skaffold-20201109133744-342799 dockerd[3182]: time="2020-11-09T21:38:23.520405724Z" level=info msg="Loading containers: start."
	* Nov 09 21:38:23 skaffold-20201109133744-342799 dockerd[3182]: time="2020-11-09T21:38:23.755661191Z" level=info msg="Removing stale sandbox 4609224aa47e8ab8ad6eed161b266ca66c0b5f3133e8f5c6576847cb5df361da (f98df23e53a749d7d9a08e0e067430219b2e448b352b3b65c19636e0461469e3)"
	* Nov 09 21:38:23 skaffold-20201109133744-342799 dockerd[3182]: time="2020-11-09T21:38:23.758729863Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5622e1fe04571e10330faa962407de5746ebfcce0f3f60ecc23f8a520799e3ba 2d36d153afa59c5eb7ce3d6145d2951774bd5f52fc859100a3cc1b8a14116dc5], retrying...."
	* Nov 09 21:38:23 skaffold-20201109133744-342799 dockerd[3182]: time="2020-11-09T21:38:23.821724199Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Nov 09 21:38:23 skaffold-20201109133744-342799 dockerd[3182]: time="2020-11-09T21:38:23.882783774Z" level=info msg="Loading containers: done."
	* Nov 09 21:38:23 skaffold-20201109133744-342799 dockerd[3182]: time="2020-11-09T21:38:23.922665535Z" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 version=19.03.13
	* Nov 09 21:38:23 skaffold-20201109133744-342799 dockerd[3182]: time="2020-11-09T21:38:23.922780314Z" level=info msg="Daemon has completed initialization"
	* Nov 09 21:38:23 skaffold-20201109133744-342799 dockerd[3182]: time="2020-11-09T21:38:23.940062647Z" level=info msg="API listen on /var/run/docker.sock"
	* Nov 09 21:38:23 skaffold-20201109133744-342799 dockerd[3182]: time="2020-11-09T21:38:23.940092098Z" level=info msg="API listen on [::]:2376"
	* Nov 09 21:38:23 skaffold-20201109133744-342799 systemd[1]: Started Docker Application Container Engine.
	* Nov 09 21:38:23 skaffold-20201109133744-342799 dockerd[3182]: http: TLS handshake error from 192.168.59.1:48278: tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "jenkins")
	* Nov 09 21:38:23 skaffold-20201109133744-342799 dockerd[3182]: http: TLS handshake error from 192.168.59.1:48280: tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "jenkins")
	* Nov 09 21:38:23 skaffold-20201109133744-342799 dockerd[3182]: http: TLS handshake error from 192.168.59.1:48282: tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "jenkins")
	* Nov 09 21:38:24 skaffold-20201109133744-342799 dockerd[3182]: http: TLS handshake error from 192.168.59.1:48306: tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "jenkins")
	* Nov 09 21:38:24 skaffold-20201109133744-342799 dockerd[3182]: http: TLS handshake error from 192.168.59.1:48312: tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "jenkins")
	* Nov 09 21:38:24 skaffold-20201109133744-342799 dockerd[3182]: http: TLS handshake error from 192.168.59.1:48310: tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "jenkins")
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	* 207eed6b80913       0369cf4303ffd       16 seconds ago      Running             etcd                      1                   19f361ff2e58a
	* 0613447106fa7       607331163122e       16 seconds ago      Running             kube-apiserver            1                   0f0df78083e8f
	* 4fe511c42848d       8603821e1a7a5       16 seconds ago      Running             kube-controller-manager   1                   f1d996eedffb6
	* 4150ba072b68a       2f32d66b884f8       16 seconds ago      Running             kube-scheduler            1                   2d192b3c1abf4
	* 8f88ce793b5ce       0369cf4303ffd       31 seconds ago      Exited              etcd                      0                   5990cbaadffdb
	* b0f9957e25cd6       8603821e1a7a5       31 seconds ago      Exited              kube-controller-manager   0                   3f09e5d843cdb
	* 345e6bb5c8d25       607331163122e       31 seconds ago      Exited              kube-apiserver            0                   3769b94bf09d7
	* b786aa5e6736b       2f32d66b884f8       31 seconds ago      Exited              kube-scheduler            0                   8f6eb411e6280
	* 
	* ==> describe nodes <==
	* Name:               skaffold-20201109133744-342799
	* Roles:              master
	* Labels:             kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=skaffold-20201109133744-342799
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=21ac2a6a37964be4739a8be2fb5a50a8d224597d
	*                     minikube.k8s.io/name=skaffold-20201109133744-342799
	*                     minikube.k8s.io/updated_at=2020_11_09T13_38_19_0700
	*                     minikube.k8s.io/version=v1.14.2
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Mon, 09 Nov 2020 21:38:15 +0000
	* Taints:             node.kubernetes.io/not-ready:NoSchedule
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  skaffold-20201109133744-342799
	*   AcquireTime:     <unset>
	*   RenewTime:       Mon, 09 Nov 2020 21:38:41 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Mon, 09 Nov 2020 21:38:31 +0000   Mon, 09 Nov 2020 21:38:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Mon, 09 Nov 2020 21:38:31 +0000   Mon, 09 Nov 2020 21:38:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Mon, 09 Nov 2020 21:38:31 +0000   Mon, 09 Nov 2020 21:38:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Mon, 09 Nov 2020 21:38:31 +0000   Mon, 09 Nov 2020 21:38:31 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.59.16
	*   Hostname:    skaffold-20201109133744-342799
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 b49e39b7153b4d2a8bae8cf656e853e3
	*   System UUID:                1d12f481-69a7-4b90-b1ef-5dc866e668a5
	*   Boot ID:                    9ad1ab50-5be9-48e2-8ae1-dc31113bc120
	*   Kernel Version:             4.9.0-14-amd64
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://19.3.13
	*   Kubelet Version:            v1.19.2
	*   Kube-Proxy Version:         v1.19.2
	* Non-terminated Pods:          (4 in total)
	*   Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	*   kube-system                 etcd-skaffold-20201109133744-342799                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	*   kube-system                 kube-apiserver-skaffold-20201109133744-342799             250m (3%)     0 (0%)      0 (0%)           0 (0%)         20s
	*   kube-system                 kube-controller-manager-skaffold-20201109133744-342799    200m (2%)     0 (0%)      0 (0%)           0 (0%)         20s
	*   kube-system                 kube-scheduler-skaffold-20201109133744-342799             100m (1%)     0 (0%)      0 (0%)           0 (0%)         20s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests   Limits
	*   --------           --------   ------
	*   cpu                550m (6%)  0 (0%)
	*   memory             0 (0%)     0 (0%)
	*   ephemeral-storage  0 (0%)     0 (0%)
	*   hugepages-1Gi      0 (0%)     0 (0%)
	*   hugepages-2Mi      0 (0%)     0 (0%)
	* Events:
	*   Type    Reason                   Age                From     Message
	*   ----    ------                   ----               ----     -------
	*   Normal  NodeHasSufficientMemory  33s (x5 over 33s)  kubelet  Node skaffold-20201109133744-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    33s (x5 over 33s)  kubelet  Node skaffold-20201109133744-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     33s (x5 over 33s)  kubelet  Node skaffold-20201109133744-342799 status is now: NodeHasSufficientPID
	*   Normal  Starting                 21s                kubelet  Starting kubelet.
	*   Normal  NodeHasSufficientMemory  21s                kubelet  Node skaffold-20201109133744-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    21s                kubelet  Node skaffold-20201109133744-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     21s                kubelet  Node skaffold-20201109133744-342799 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             21s                kubelet  Node skaffold-20201109133744-342799 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  21s                kubelet  Updated Node Allocatable limit across pods
	* 
	* ==> dmesg <==
	* [  +0.000003] Call Trace:
	* [  +0.000006]  [<ffffffffa4537e7e>] ? dump_stack+0x66/0x88
	* [  +0.000004]  [<ffffffffa440a92b>] ? dump_header+0x78/0x1fd
	* [  +0.000002]  [<ffffffffa44031ac>] ? mem_cgroup_scan_tasks+0xcc/0x100
	* [  +0.000004]  [<ffffffffa43899fa>] ? oom_kill_process+0x22a/0x3f0
	* [  +0.000002]  [<ffffffffa4389e91>] ? out_of_memory+0x111/0x470
	* [  +0.000004]  [<ffffffffa43fe2f9>] ? mem_cgroup_out_of_memory+0x49/0x80
	* [  +0.000002]  [<ffffffffa4403b25>] ? mem_cgroup_oom_synchronize+0x325/0x340
	* [  +0.000003]  [<ffffffffa43fedc0>] ? mem_cgroup_css_reset+0xd0/0xd0
	* [  +0.000002]  [<ffffffffa438a21f>] ? pagefault_out_of_memory+0x2f/0x80
	* [  +0.000003]  [<ffffffffa42636cd>] ? __do_page_fault+0x4bd/0x4f0
	* [  +0.000004]  [<ffffffffa481ae52>] ? schedule+0x32/0x80
	* [  +0.000002]  [<ffffffffa4820b28>] ? page_fault+0x28/0x30
	* [  +0.000298] Memory cgroup out of memory: Kill process 372697 (registry-server) score 1751 or sacrifice child
	* [  +0.011930] Killed process 372697 (registry-server) total-vm:243704kB, anon-rss:64324kB, file-rss:16044kB, shmem-rss:0kB
	* [Nov 9 21:28] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:29] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:31] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:32] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +18.886743] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:33] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:34] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +28.771909] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:35] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:37] cgroup: cgroup2: unknown option "nsdelegate"
	* 
	* ==> etcd [207eed6b8091] <==
	* 2020-11-09 21:38:34.762375 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/storage-provisioner\" " with result "range_response_count:1 size:2833" took too long (2.087717518s) to execute
	* 2020-11-09 21:38:34.762500 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.593064294s) to execute
	* 2020-11-09 21:38:35.815799 W | wal: sync duration of 1.046282869s, expected less than 1s
	* 2020-11-09 21:38:36.769808 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.00015875s) to execute
	* WARNING: 2020/11/09 21:38:36 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	* 2020-11-09 21:38:36.883878 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context canceled" took too long (1.999825473s) to execute
	* WARNING: 2020/11/09 21:38:36 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	* 2020-11-09 21:38:38.869555 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context canceled" took too long (2.000060772s) to execute
	* WARNING: 2020/11/09 21:38:38 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	* 2020-11-09 21:38:39.236524 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503)
	* 2020-11-09 21:38:39.884641 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context canceled" took too long (1.999883711s) to execute
	* WARNING: 2020/11/09 21:38:39 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	* 2020-11-09 21:38:40.152285 W | wal: sync duration of 4.064667893s, expected less than 1s
	* 2020-11-09 21:38:40.483678 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" " with result "range_response_count:46 size:34238" took too long (5.715188429s) to execute
	* 2020-11-09 21:38:40.483811 W | etcdserver: request "header:<ID:8039336204080085147 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.1645f4b854fc2129\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.1645f4b854fc2129\" value_size:716 lease:8039336204080085145 >> failure:<>>" with result "size:16" took too long (4.667653501s) to execute
	* 2020-11-09 21:38:40.484183 W | etcdserver: read-only range request "key:\"/registry/priorityclasses/system-cluster-critical\" " with result "range_response_count:1 size:476" took too long (5.714227654s) to execute
	* 2020-11-09 21:38:40.484266 W | etcdserver: read-only range request "key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" " with result "range_response_count:2 size:1762" took too long (5.715621374s) to execute
	* 2020-11-09 21:38:40.484289 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:1 size:120" took too long (5.715479358s) to execute
	* 2020-11-09 21:38:40.484363 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-node-lease\" " with result "range_response_count:1 size:271" took too long (5.714601365s) to execute
	* 2020-11-09 21:38:40.484382 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-skaffold-20201109133744-342799\" " with result "range_response_count:1 size:6044" took too long (5.713992255s) to execute
	* 2020-11-09 21:38:40.489170 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.519685945s) to execute
	* 2020-11-09 21:38:40.489511 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/storage-provisioner\" " with result "range_response_count:1 size:2833" took too long (3.812694095s) to execute
	* 2020-11-09 21:38:40.775056 W | etcdserver: read-only range request "key:\"/registry/ranges/serviceips\" " with result "range_response_count:1 size:118" took too long (288.802773ms) to execute
	* 2020-11-09 21:38:40.776342 W | etcdserver: read-only range request "key:\"/registry/clusterroles/system:aggregate-to-admin\" " with result "range_response_count:1 size:840" took too long (289.024735ms) to execute
	* 2020-11-09 21:38:40.777527 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (208.084997ms) to execute
	* 
	* ==> etcd [8f88ce793b5c] <==
	* 2020-11-09 21:38:10.466990 W | auth: simple token is not cryptographically signed
	* 2020-11-09 21:38:10.475820 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	* 2020-11-09 21:38:10.476407 I | etcdserver: 47984c33979a6f91 as single-node; fast-forwarding 9 ticks (election ticks 10)
	* raft2020/11/09 21:38:10 INFO: 47984c33979a6f91 switched to configuration voters=(5158957157623426961)
	* 2020-11-09 21:38:10.477080 I | etcdserver/membership: added member 47984c33979a6f91 [https://192.168.59.16:2380] to cluster 79741b01b410835d
	* 2020-11-09 21:38:10.479284 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	* 2020-11-09 21:38:10.479485 I | embed: listening for peers on 192.168.59.16:2380
	* 2020-11-09 21:38:10.479563 I | embed: listening for metrics on http://127.0.0.1:2381
	* raft2020/11/09 21:38:11 INFO: 47984c33979a6f91 is starting a new election at term 1
	* raft2020/11/09 21:38:11 INFO: 47984c33979a6f91 became candidate at term 2
	* raft2020/11/09 21:38:11 INFO: 47984c33979a6f91 received MsgVoteResp from 47984c33979a6f91 at term 2
	* raft2020/11/09 21:38:11 INFO: 47984c33979a6f91 became leader at term 2
	* raft2020/11/09 21:38:11 INFO: raft.node: 47984c33979a6f91 elected leader 47984c33979a6f91 at term 2
	* 2020-11-09 21:38:11.264460 I | etcdserver: published {Name:skaffold-20201109133744-342799 ClientURLs:[https://192.168.59.16:2379]} to cluster 79741b01b410835d
	* 2020-11-09 21:38:11.264489 I | embed: ready to serve client requests
	* 2020-11-09 21:38:11.264518 I | embed: ready to serve client requests
	* 2020-11-09 21:38:11.265505 I | etcdserver: setting up the initial cluster version to 3.4
	* 2020-11-09 21:38:11.266703 N | etcdserver/membership: set the initial cluster version to 3.4
	* 2020-11-09 21:38:11.266850 I | etcdserver/api: enabled capabilities for version 3.4
	* 2020-11-09 21:38:11.268117 I | embed: serving client requests on 127.0.0.1:2379
	* 2020-11-09 21:38:11.269026 I | embed: serving client requests on 192.168.59.16:2379
	* 2020-11-09 21:38:22.563856 N | pkg/osutil: received terminated signal, shutting down...
	* WARNING: 2020/11/09 21:38:22 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* WARNING: 2020/11/09 21:38:22 grpc: addrConn.createTransport failed to connect to {192.168.59.16:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.59.16:2379: connect: connection refused". Reconnecting...
	* 2020-11-09 21:38:22.571773 I | etcdserver: skipped leadership transfer for single voting member cluster
	* 
	* ==> kernel <==
	*  21:38:41 up  1:21,  0 users,  load average: 4.10, 3.22, 4.41
	* Linux skaffold-20201109133744-342799 4.9.0-14-amd64 #1 SMP Debian 4.9.240-2 (2020-10-30) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [0613447106fa] <==
	* Trace[2072022263]: [5.718419036s] [5.718419036s] END
	* I1109 21:38:40.485009       1 trace.go:205] Trace[1055711581]: "List etcd3" key:/clusterrolebindings,resourceVersion:,resourceVersionMatch:,limit:0,continue: (09-Nov-2020 21:38:34.768) (total time: 5716ms):
	* Trace[1055711581]: [5.716958069s] [5.716958069s] END
	* I1109 21:38:40.485079       1 trace.go:205] Trace[1946411930]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.19.2 (linux/amd64) kubernetes/f574309,client:192.168.59.16 (09-Nov-2020 21:38:35.460) (total time: 5024ms):
	* Trace[1946411930]: ---"Object stored in database" 5024ms (21:38:00.485)
	* Trace[1946411930]: [5.024578257s] [5.024578257s] END
	* I1109 21:38:40.485092       1 trace.go:205] Trace[1098500105]: "List" url:/api/v1/services,user-agent:kube-apiserver/v1.19.2 (linux/amd64) kubernetes/f574309,client:127.0.0.1 (09-Nov-2020 21:38:34.768) (total time: 5716ms):
	* Trace[1098500105]: ---"Listing from storage done" 5716ms (21:38:00.484)
	* Trace[1098500105]: [5.716781117s] [5.716781117s] END
	* I1109 21:38:40.485202       1 trace.go:205] Trace[1297410251]: "GuaranteedUpdate etcd3" type:*core.RangeAllocation (09-Nov-2020 21:38:34.768) (total time: 5716ms):
	* Trace[1297410251]: ---"initial value restored" 5716ms (21:38:00.485)
	* Trace[1297410251]: [5.716598685s] [5.716598685s] END
	* I1109 21:38:40.485299       1 trace.go:205] Trace[1683290812]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-skaffold-20201109133744-342799,user-agent:kubelet/v1.19.2 (linux/amd64) kubernetes/f574309,client:192.168.59.16 (09-Nov-2020 21:38:34.769) (total time: 5715ms):
	* Trace[1683290812]: ---"About to write a response" 5715ms (21:38:00.484)
	* Trace[1683290812]: [5.715813252s] [5.715813252s] END
	* I1109 21:38:40.485427       1 trace.go:205] Trace[844336449]: "Get" url:/api/v1/namespaces/kube-node-lease,user-agent:kube-apiserver/v1.19.2 (linux/amd64) kubernetes/f574309,client:127.0.0.1 (09-Nov-2020 21:38:34.769) (total time: 5716ms):
	* Trace[844336449]: ---"About to write a response" 5716ms (21:38:00.485)
	* Trace[844336449]: [5.71610741s] [5.71610741s] END
	* I1109 21:38:40.485487       1 trace.go:205] Trace[1122773561]: "List" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings,user-agent:kube-apiserver/v1.19.2 (linux/amd64) kubernetes/f574309,client:127.0.0.1 (09-Nov-2020 21:38:34.768) (total time: 5717ms):
	* Trace[1122773561]: ---"Listing from storage done" 5717ms (21:38:00.485)
	* Trace[1122773561]: [5.717436404s] [5.717436404s] END
	* I1109 21:38:40.485907       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	* I1109 21:38:40.490618       1 trace.go:205] Trace[1264948912]: "Get" url:/api/v1/namespaces/kube-system/pods/storage-provisioner,user-agent:kube-scheduler/v1.19.2 (linux/amd64) kubernetes/f574309/scheduler,client:192.168.59.16 (09-Nov-2020 21:38:36.676) (total time: 3814ms):
	* Trace[1264948912]: ---"About to write a response" 3814ms (21:38:00.490)
	* Trace[1264948912]: [3.814386069s] [3.814386069s] END
	* 
	* ==> kube-apiserver [345e6bb5c8d2] <==
	* I1109 21:38:22.572952       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	* W1109 21:38:22.572956       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* I1109 21:38:22.572991       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	* W1109 21:38:22.572995       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:38:22.573009       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:38:22.573054       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:38:22.573096       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:38:22.573086       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:38:22.573116       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:38:22.573152       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:38:22.573156       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* I1109 21:38:22.573164       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	* W1109 21:38:22.573172       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:38:22.572623       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:38:22.573230       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:38:22.572569       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* I1109 21:38:22.573261       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	* W1109 21:38:22.573458       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:38:22.573570       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* I1109 21:38:22.573587       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	* W1109 21:38:22.573598       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:38:22.573589       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:38:22.573630       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:38:22.573632       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:38:22.573807       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* 
	* ==> kube-controller-manager [4fe511c42848] <==
	* 	/usr/local/go/src/bytes/buffer.go:204 +0xb1
	* crypto/tls.(*Conn).readFromUntil(0xc0005f5180, 0x49faac0, 0xc0002fc0f8, 0x5, 0xc0002fc0f8, 0x38c)
	* 	/usr/local/go/src/crypto/tls/conn.go:801 +0xf3
	* crypto/tls.(*Conn).readRecordOrCCS(0xc0005f5180, 0x0, 0x0, 0xc00070fd18)
	* 	/usr/local/go/src/crypto/tls/conn.go:608 +0x115
	* crypto/tls.(*Conn).readRecord(...)
	* 	/usr/local/go/src/crypto/tls/conn.go:576
	* crypto/tls.(*Conn).Read(0xc0005f5180, 0xc000c3d000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	* 	/usr/local/go/src/crypto/tls/conn.go:1252 +0x15f
	* bufio.(*Reader).Read(0xc0006ff7a0, 0xc000c24118, 0x9, 0x9, 0xc00070fd18, 0x45add00, 0x9be4eb)
	* 	/usr/local/go/src/bufio/bufio.go:227 +0x222
	* io.ReadAtLeast(0x49f4740, 0xc0006ff7a0, 0xc000c24118, 0x9, 0x9, 0x9, 0xc00007e050, 0x0, 0x49f4b60)
	* 	/usr/local/go/src/io/io.go:314 +0x87
	* io.ReadFull(...)
	* 	/usr/local/go/src/io/io.go:333
	* k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc000c24118, 0x9, 0x9, 0x49f4740, 0xc0006ff7a0, 0x0, 0x0, 0xc000f32cc0, 0x0)
	* 	/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89
	* k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc000c240e0, 0xc000f32cc0, 0x0, 0x0, 0x0)
	* 	/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	* k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00070ffa8, 0x0, 0x0)
	* 	/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1794 +0xd8
	* k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc000349680)
	* 	/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1716 +0x6f
	* created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
	* 	/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:695 +0x66e
	* 
	* ==> kube-controller-manager [b0f9957e25cd] <==
	* 
	* ==> kube-scheduler [4150ba072b68] <==
	* I1109 21:38:25.158056       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:38:25.158125       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:38:26.069723       1 serving.go:331] Generated self-signed cert in-memory
	* W1109 21:38:30.873498       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	* W1109 21:38:30.873551       1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	* W1109 21:38:30.873575       1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
	* W1109 21:38:30.873586       1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	* I1109 21:38:30.968235       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:38:30.968270       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:38:30.972491       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	* E1109 21:38:30.976919       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* I1109 21:38:30.976964       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:38:30.976971       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:38:30.976993       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* E1109 21:38:30.979902       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:38:30.980284       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1109 21:38:30.980448       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:38:30.980653       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:38:31.058150       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* I1109 21:38:31.077782       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kube-scheduler [b786aa5e6736] <==
	* I1109 21:38:15.874970       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:38:15.875048       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:38:15.876594       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* E1109 21:38:15.881826       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:38:15.881848       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1109 21:38:15.881982       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:38:15.882299       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:38:15.882396       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:38:15.882436       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:38:15.882544       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1109 21:38:15.882550       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1109 21:38:15.882953       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:38:15.885713       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:38:15.885774       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:38:15.885795       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:38:15.885779       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:38:16.686009       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:38:16.727863       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1109 21:38:16.872852       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:38:16.903358       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:38:16.982675       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:38:17.021774       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:38:17.029525       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:38:17.058471       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* I1109 21:38:18.575300       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2020-11-09 21:37:48 UTC, end at Mon 2020-11-09 21:38:42 UTC. --
	* Nov 09 21:38:23 skaffold-20201109133744-342799 kubelet[2600]: E1109 21:38:23.225526    2600 kuberuntime_sandbox.go:69] CreatePodSandbox for pod "etcd-skaffold-20201109133744-342799_kube-system(561b93386ca952ecb4af306a778418ac)" failed: rpc error: code = Unknown desc = failed to inspect sandbox image "k8s.gcr.io/pause:3.2": Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	* Nov 09 21:38:23 skaffold-20201109133744-342799 kubelet[2600]: E1109 21:38:23.225543    2600 kuberuntime_manager.go:730] createPodSandbox for pod "etcd-skaffold-20201109133744-342799_kube-system(561b93386ca952ecb4af306a778418ac)" failed: rpc error: code = Unknown desc = failed to inspect sandbox image "k8s.gcr.io/pause:3.2": Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	* Nov 09 21:38:23 skaffold-20201109133744-342799 kubelet[2600]: E1109 21:38:23.225607    2600 pod_workers.go:191] Error syncing pod 561b93386ca952ecb4af306a778418ac ("etcd-skaffold-20201109133744-342799_kube-system(561b93386ca952ecb4af306a778418ac)"), skipping: failed to "CreatePodSandbox" for "etcd-skaffold-20201109133744-342799_kube-system(561b93386ca952ecb4af306a778418ac)" with CreatePodSandboxError: "CreatePodSandbox for pod \"etcd-skaffold-20201109133744-342799_kube-system(561b93386ca952ecb4af306a778418ac)\" failed: rpc error: code = Unknown desc = failed to inspect sandbox image \"k8s.gcr.io/pause:3.2\": Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	* Nov 09 21:38:23 skaffold-20201109133744-342799 kubelet[2600]: W1109 21:38:23.229144    2600 pod_container_deletor.go:79] Container "3f09e5d843cdb7bd36af825c0f5985583af6551e35d8f5761b89a54fa57770c9" not found in pod's containers
	* Nov 09 21:38:23 skaffold-20201109133744-342799 kubelet[2600]: W1109 21:38:23.229776    2600 status_manager.go:550] Failed to get status for pod "kube-controller-manager-skaffold-20201109133744-342799_kube-system(dcc127c185c80a61d90d8e659e768641)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-skaffold-20201109133744-342799": dial tcp 192.168.59.16:8443: connect: connection refused
	* Nov 09 21:38:23 skaffold-20201109133744-342799 kubelet[2600]: E1109 21:38:23.414645    2600 remote_runtime.go:113] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-apiserver-skaffold-20201109133744-342799": error during connect: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/create?name=k8s_POD_kube-apiserver-skaffold-20201109133744-342799_kube-system_9dd5561cfff0f3c9ec8590732cfa485d_1": EOF
	* Nov 09 21:38:23 skaffold-20201109133744-342799 kubelet[2600]: E1109 21:38:23.414647    2600 remote_runtime.go:113] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-controller-manager-skaffold-20201109133744-342799": error during connect: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/f98df23e53a749d7d9a08e0e067430219b2e448b352b3b65c19636e0461469e3/start": EOF
	* Nov 09 21:38:23 skaffold-20201109133744-342799 kubelet[2600]: E1109 21:38:23.414710    2600 kuberuntime_sandbox.go:69] CreatePodSandbox for pod "kube-apiserver-skaffold-20201109133744-342799_kube-system(9dd5561cfff0f3c9ec8590732cfa485d)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-apiserver-skaffold-20201109133744-342799": error during connect: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/create?name=k8s_POD_kube-apiserver-skaffold-20201109133744-342799_kube-system_9dd5561cfff0f3c9ec8590732cfa485d_1": EOF
	* Nov 09 21:38:23 skaffold-20201109133744-342799 kubelet[2600]: E1109 21:38:23.414716    2600 kuberuntime_sandbox.go:69] CreatePodSandbox for pod "kube-controller-manager-skaffold-20201109133744-342799_kube-system(dcc127c185c80a61d90d8e659e768641)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-controller-manager-skaffold-20201109133744-342799": error during connect: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/f98df23e53a749d7d9a08e0e067430219b2e448b352b3b65c19636e0461469e3/start": EOF
	* Nov 09 21:38:23 skaffold-20201109133744-342799 kubelet[2600]: E1109 21:38:23.414736    2600 kuberuntime_manager.go:730] createPodSandbox for pod "kube-controller-manager-skaffold-20201109133744-342799_kube-system(dcc127c185c80a61d90d8e659e768641)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-controller-manager-skaffold-20201109133744-342799": error during connect: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/f98df23e53a749d7d9a08e0e067430219b2e448b352b3b65c19636e0461469e3/start": EOF
	* Nov 09 21:38:23 skaffold-20201109133744-342799 kubelet[2600]: E1109 21:38:23.414737    2600 kuberuntime_manager.go:730] createPodSandbox for pod "kube-apiserver-skaffold-20201109133744-342799_kube-system(9dd5561cfff0f3c9ec8590732cfa485d)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-apiserver-skaffold-20201109133744-342799": error during connect: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/create?name=k8s_POD_kube-apiserver-skaffold-20201109133744-342799_kube-system_9dd5561cfff0f3c9ec8590732cfa485d_1": EOF
	* Nov 09 21:38:23 skaffold-20201109133744-342799 kubelet[2600]: E1109 21:38:23.414802    2600 pod_workers.go:191] Error syncing pod 9dd5561cfff0f3c9ec8590732cfa485d ("kube-apiserver-skaffold-20201109133744-342799_kube-system(9dd5561cfff0f3c9ec8590732cfa485d)"), skipping: failed to "CreatePodSandbox" for "kube-apiserver-skaffold-20201109133744-342799_kube-system(9dd5561cfff0f3c9ec8590732cfa485d)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-apiserver-skaffold-20201109133744-342799_kube-system(9dd5561cfff0f3c9ec8590732cfa485d)\" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-apiserver-skaffold-20201109133744-342799\": error during connect: Post \"http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/create?name=k8s_POD_kube-apiserver-skaffold-20201109133744-342799_kube-system_9dd5561cfff0f3c9ec8590732cfa485d_1\": EOF"
	* Nov 09 21:38:23 skaffold-20201109133744-342799 kubelet[2600]: E1109 21:38:23.414822    2600 pod_workers.go:191] Error syncing pod dcc127c185c80a61d90d8e659e768641 ("kube-controller-manager-skaffold-20201109133744-342799_kube-system(dcc127c185c80a61d90d8e659e768641)"), skipping: failed to "CreatePodSandbox" for "kube-controller-manager-skaffold-20201109133744-342799_kube-system(dcc127c185c80a61d90d8e659e768641)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-controller-manager-skaffold-20201109133744-342799_kube-system(dcc127c185c80a61d90d8e659e768641)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"kube-controller-manager-skaffold-20201109133744-342799\": error during connect: Post \"http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/f98df23e53a749d7d9a08e0e067430219b2e448b352b3b65c19636e0461469e3/start\": EOF"
	* Nov 09 21:38:24 skaffold-20201109133744-342799 kubelet[2600]: W1109 21:38:24.244230    2600 pod_container_deletor.go:79] Container "f98df23e53a749d7d9a08e0e067430219b2e448b352b3b65c19636e0461469e3" not found in pod's containers
	* Nov 09 21:38:24 skaffold-20201109133744-342799 kubelet[2600]: W1109 21:38:24.250977    2600 status_manager.go:550] Failed to get status for pod "kube-apiserver-skaffold-20201109133744-342799_kube-system(9dd5561cfff0f3c9ec8590732cfa485d)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-skaffold-20201109133744-342799": dial tcp 192.168.59.16:8443: connect: connection refused
	* Nov 09 21:38:25 skaffold-20201109133744-342799 kubelet[2600]: W1109 21:38:25.360145    2600 status_manager.go:550] Failed to get status for pod "etcd-skaffold-20201109133744-342799_kube-system(561b93386ca952ecb4af306a778418ac)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-skaffold-20201109133744-342799": dial tcp 192.168.59.16:8443: connect: connection refused
	* Nov 09 21:38:25 skaffold-20201109133744-342799 kubelet[2600]: W1109 21:38:25.377434    2600 status_manager.go:550] Failed to get status for pod "kube-controller-manager-skaffold-20201109133744-342799_kube-system(dcc127c185c80a61d90d8e659e768641)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-skaffold-20201109133744-342799": dial tcp 192.168.59.16:8443: connect: connection refused
	* Nov 09 21:38:25 skaffold-20201109133744-342799 kubelet[2600]: E1109 21:38:25.458273    2600 event.go:273] Unable to write event: 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events": dial tcp 192.168.59.16:8443: connect: connection refused' (may retry after sleeping)
	* Nov 09 21:38:25 skaffold-20201109133744-342799 kubelet[2600]: W1109 21:38:25.462437    2600 status_manager.go:550] Failed to get status for pod "kube-apiserver-skaffold-20201109133744-342799_kube-system(9dd5561cfff0f3c9ec8590732cfa485d)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-skaffold-20201109133744-342799": dial tcp 192.168.59.16:8443: connect: connection refused
	* Nov 09 21:38:25 skaffold-20201109133744-342799 kubelet[2600]: W1109 21:38:25.484574    2600 status_manager.go:550] Failed to get status for pod "kube-scheduler-skaffold-20201109133744-342799_kube-system(ff7d12f9e4f14e202a85a7c5534a3129)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-skaffold-20201109133744-342799": dial tcp 192.168.59.16:8443: connect: connection refused
	* Nov 09 21:38:41 skaffold-20201109133744-342799 kubelet[2600]: W1109 21:38:41.072713    2600 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
	* Nov 09 21:38:41 skaffold-20201109133744-342799 kubelet[2600]: W1109 21:38:41.074130    2600 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
	* Nov 09 21:38:41 skaffold-20201109133744-342799 kubelet[2600]: I1109 21:38:41.658973    2600 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b0f9957e25cd678bb4055c9c758430b7486edec83bd76faa60c5524ff17b7ad0
	* Nov 09 21:38:41 skaffold-20201109133744-342799 kubelet[2600]: I1109 21:38:41.659390    2600 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 4fe511c42848d9185e2b5128bbea544e3c2d34543f58ecce27b3b2662fffce9f
	* Nov 09 21:38:41 skaffold-20201109133744-342799 kubelet[2600]: E1109 21:38:41.660293    2600 pod_workers.go:191] Error syncing pod dcc127c185c80a61d90d8e659e768641 ("kube-controller-manager-skaffold-20201109133744-342799_kube-system(dcc127c185c80a61d90d8e659e768641)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-skaffold-20201109133744-342799_kube-system(dcc127c185c80a61d90d8e659e768641)"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 13:38:41.739554  434082 out.go:286] unable to execute * 2020-11-09 21:38:40.483811 W | etcdserver: request "header:<ID:8039336204080085147 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.1645f4b854fc2129\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.1645f4b854fc2129\" value_size:716 lease:8039336204080085145 >> failure:<>>" with result "size:16" took too long (4.667653501s) to execute
	: html/template:* 2020-11-09 21:38:40.483811 W | etcdserver: request "header:<ID:8039336204080085147 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.1645f4b854fc2129\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.1645f4b854fc2129\" value_size:716 lease:8039336204080085145 >> failure:<>>" with result "size:16" took too long (4.667653501s) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E1109 13:38:42.445943  434082 logs.go:181] command /bin/bash -c "docker logs --tail 25 b0f9957e25cd" failed with error: /bin/bash -c "docker logs --tail 25 b0f9957e25cd": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: b0f9957e25cd
	 output: "\n** stderr ** \nError: No such container: b0f9957e25cd\n\n** /stderr **"
	! unable to fetch logs for: kube-controller-manager [b0f9957e25cd]

                                                
                                                
** /stderr **
helpers_test.go:243: failed logs error: exit status 110
helpers_test.go:171: Cleaning up "skaffold-20201109133744-342799" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-20201109133744-342799
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-20201109133744-342799: (2.601170269s)
--- FAIL: TestSkaffold (60.52s)

                                                
                                    
TestFunctional/parallel/MySQL (3.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:929: (dbg) Run:  kubectl --context functional-20201109132758-342799 replace --force -f testdata/mysql.yaml
functional_test.go:929: (dbg) Non-zero exit: kubectl --context functional-20201109132758-342799 replace --force -f testdata/mysql.yaml: exit status 1 (87.237618ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.16:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:931: failed to kubectl replace mysql: args "kubectl --context functional-20201109132758-342799 replace --force -f testdata/mysql.yaml" failed: exit status 1
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:225: (dbg) Run:  docker inspect functional-20201109132758-342799
helpers_test.go:229: (dbg) docker inspect functional-20201109132758-342799:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7858cd2d1cc569692bfecfa4a76f6e42c288cf8f4bfbab92295b73794e68dd24",
	        "Created": "2020-11-09T21:28:00.37187081Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 376738,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-11-09T21:28:00.960644046Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e0876e0a2db41a04c8143cbd27a7a5f9e10b610cb093093def031c59e5b44b0c",
	        "ResolvConfPath": "/var/lib/docker/containers/7858cd2d1cc569692bfecfa4a76f6e42c288cf8f4bfbab92295b73794e68dd24/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7858cd2d1cc569692bfecfa4a76f6e42c288cf8f4bfbab92295b73794e68dd24/hostname",
	        "HostsPath": "/var/lib/docker/containers/7858cd2d1cc569692bfecfa4a76f6e42c288cf8f4bfbab92295b73794e68dd24/hosts",
	        "LogPath": "/var/lib/docker/containers/7858cd2d1cc569692bfecfa4a76f6e42c288cf8f4bfbab92295b73794e68dd24/7858cd2d1cc569692bfecfa4a76f6e42c288cf8f4bfbab92295b73794e68dd24-json.log",
	        "Name": "/functional-20201109132758-342799",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20201109132758-342799:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20201109132758-342799",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/df0e95b8d2e740ac9fd4faebf5b3e8bcafd6bccb9d4c5c3929b9240e5a0b0d76-init/diff:/var/lib/docker/overlay2/95e86d765bff52afd170fbce58d14aba80acb592622f18ddf93f06d4a9f68b7b/diff:/var/lib/docker/overlay2/f3b9d7ca87d0bc2c33a2b73d4acfe36b9573ed84758410f9fb2646a520f241fe/diff:/var/lib/docker/overlay2/207fc0f6ee67dd96ccb25c63e93e7189f5f397b509f8c71ead5583a80c5f94b5/diff:/var/lib/docker/overlay2/8caa5ec9f05b14f99c7708700c1734fe9d519d629a5228c0c3cb1231f1cef09e/diff:/var/lib/docker/overlay2/f2f5d9470c647e56b32cee1438db6b6249291738854b6e7a0a4a405034649aa6/diff:/var/lib/docker/overlay2/99b3672a581ed70eb5d25da5aed44455272a7adbd48148c91dc38abfbaa6e93d/diff:/var/lib/docker/overlay2/4bc3f7ac122327f368cdfb9bf82d578fce25cf7f63160aa08d869610da8889f4/diff:/var/lib/docker/overlay2/0f945a2ada7520d0f95289ddf08a8e7aedfd5dbfad8087529175cd81d61b52bc/diff:/var/lib/docker/overlay2/a14122d44a5bdb05416a0e7fbe1684738e86454c2eeb7671ecc856e6fdaa0273/diff:/var/lib/docker/overlay2/e669c0
1451df27b1aeed4835d4d9d40fae282947f2b260d9bc1d86355f464b80/diff:/var/lib/docker/overlay2/f0227d1f2e3671850f58a9e89d347d8c2a26c40d79b078f02122819452c582d8/diff:/var/lib/docker/overlay2/35b3431b00b5c7bd66b1d24487a8a45645708ce7a87848986068c0df1515b812/diff:/var/lib/docker/overlay2/c1af3457c560ed19e519e64ff919ac7035d8d9801f4f44adf7b160339827ebd0/diff:/var/lib/docker/overlay2/3861f37a52f49a93d483d69cc77384a5792b260325af8224c6225039f7480889/diff:/var/lib/docker/overlay2/05239e7556fc89f82e761ff1ddd2d1ed0ee1b426a039ad3c9a3ba0519d737f51/diff:/var/lib/docker/overlay2/cb1f04e7554841e6f2102080f7854a3c7e464e45648c1419ef2f8b4d8a03f0bb/diff:/var/lib/docker/overlay2/80908997b1122a686bee670e8dd93659dece6b51614d8576f02c11b96178534d/diff:/var/lib/docker/overlay2/6c4d0be909f5f0a95818a6009b44c355b57e39b0709de1cce2ba45597354d590/diff:/var/lib/docker/overlay2/3a855d5f2b162c2d938a5745ba8131c9a2b244967b4a03f6cddd177236f3a934/diff:/var/lib/docker/overlay2/0ab0a2a3e614830a41a389aa6ff55503f9c10b1227acfb927550f9726eb9a605/diff:/var/lib/d
ocker/overlay2/3331dc047d4c1c4d3b6bf48aba65be1ad34cff6d25329ef56e91ad8e4ee65fde/diff:/var/lib/docker/overlay2/d286c2ac5bb00869e09924bb7fc7389073b7267af1cc4c3ae6a339b29e4c0cc1/diff:/var/lib/docker/overlay2/d0a518fbd2bc93756d11040cfe4d7757f96e813dd9ddf918a54a82f7e61cb791/diff:/var/lib/docker/overlay2/2d63359a4d0da5faa54d017f487391caaee972fe4fb98db4b82e7aee2d9b01ee/diff:/var/lib/docker/overlay2/40ca142bba947004e2782bce25bd04810589fffbf8086ba91e3a074517cfa13c/diff:/var/lib/docker/overlay2/5b738c1270b81825ef44e9bd466dd8533bc4e3e3796355d7dc5dc104fb3a18c8/diff:/var/lib/docker/overlay2/c57d8a6f6b648bd2cf9d7140762b8590e1a71b9040c8a0f2b8ab31f65011bcd2/diff:/var/lib/docker/overlay2/37f83aba64f92a6cf695468f5b0a1016634c95e09933dcecec297cca22b9038b/diff:/var/lib/docker/overlay2/396d3843ed4e04893bd4bf961eab2522ce90962e626e057f5be38cbe0ed65d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/df0e95b8d2e740ac9fd4faebf5b3e8bcafd6bccb9d4c5c3929b9240e5a0b0d76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/df0e95b8d2e740ac9fd4faebf5b3e8bcafd6bccb9d4c5c3929b9240e5a0b0d76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/df0e95b8d2e740ac9fd4faebf5b3e8bcafd6bccb9d4c5c3929b9240e5a0b0d76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20201109132758-342799",
	                "Source": "/var/lib/docker/volumes/functional-20201109132758-342799/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20201109132758-342799",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20201109132758-342799",
	                "name.minikube.sigs.k8s.io": "functional-20201109132758-342799",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7f4d1706982ed361770b3c3369899f16f62568439a75c6ea340b652b7323965b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7f4d1706982e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20201109132758-342799": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.16"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7858cd2d1cc5"
	                    ],
	                    "NetworkID": "eb960218460f121c58b440780515f4be612f339ac57d79feea78d1a8b7c3a107",
	                    "EndpointID": "5b5005f05bc2bb73a04355957f78b7c80f64e9da505255c3afb87c886cac7394",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:10",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-20201109132758-342799 -n functional-20201109132758-342799
helpers_test.go:233: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-20201109132758-342799 -n functional-20201109132758-342799: exit status 2 (443.380884ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:233: status error: exit status 2 (may be ok)
helpers_test.go:238: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 logs -n 25

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20201109132758-342799 logs -n 25: exit status 110 (2.727057109s)

                                                
                                                
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Mon 2020-11-09 21:28:01 UTC, end at Mon 2020-11-09 21:58:59 UTC. --
	* Nov 09 21:58:24 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:24.764233198Z" level=warning msg="Your kernel does not support swap memory limit"
	* Nov 09 21:58:24 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:24.764287109Z" level=warning msg="Your kernel does not support cgroup rt period"
	* Nov 09 21:58:24 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:24.764299125Z" level=warning msg="Your kernel does not support cgroup rt runtime"
	* Nov 09 21:58:24 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:24.764578872Z" level=info msg="Loading containers: start."
	* Nov 09 21:58:25 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:25.100857944Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:58:25 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:25.176574832Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:58:25 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:25.191206000Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:58:25 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:25.193771908Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:58:25 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:25.258894135Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:58:25 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:25.424108426Z" level=info msg="Removing stale sandbox 725b2eed6e6d3e7933e03509e1bca581410370ef3e4818b4c98144ac61f0e685 (e6c14f47b2c5a2098ff8b55c3feaa43f179e62ff98009c74f4046b1cf6976abb)"
	* Nov 09 21:58:25 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:25.427092623Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint d4464ada8211ea7a0cc8872667fcd76c480b983eb59db3c8d28648a25e5661c1 c9fee957ded97eb86603367ea6d629891db56a57590cf04ab207d7cab514877e], retrying...."
	* Nov 09 21:58:25 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:25.866822315Z" level=info msg="Removing stale sandbox d7e5ff17476d5b1c5d6270adc70fe4fbc6eea58d132e0f4113342b828e611683 (5dc5e34b8efaaae38d2c3d474665d0150120246541be2e7b3adbda253d17dc31)"
	* Nov 09 21:58:25 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:25.869148099Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint d4464ada8211ea7a0cc8872667fcd76c480b983eb59db3c8d28648a25e5661c1 ba8a3ddaf4db0d549008b0947e52de674dbfa9bf94f848b5e7773e604462ba49], retrying...."
	* Nov 09 21:58:26 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:26.028481769Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:58:26 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:26.214463324Z" level=info msg="Removing stale sandbox fead347bc9e2e4479fcf94197026a8d40d5d179ee63ad5be0a4f3c8704dd0681 (9f918c928b3bb06b149b9ad878f48b5c33d494213e54a8f1609e633f2c5ea845)"
	* Nov 09 21:58:26 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:26.217070098Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint d4464ada8211ea7a0cc8872667fcd76c480b983eb59db3c8d28648a25e5661c1 7f05d508e76152ef0b2d54803c56c1029ad7541683d57ec50eba1e89c11a937e], retrying...."
	* Nov 09 21:58:26 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:26.240050377Z" level=info msg="There are old running containers, the network config will not take affect"
	* Nov 09 21:58:26 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:26.252576542Z" level=info msg="Loading containers: done."
	* Nov 09 21:58:26 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:26.298616439Z" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 version=19.03.13
	* Nov 09 21:58:26 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:26.298717241Z" level=info msg="Daemon has completed initialization"
	* Nov 09 21:58:26 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:26.314904676Z" level=info msg="API listen on /var/run/docker.sock"
	* Nov 09 21:58:26 functional-20201109132758-342799 systemd[1]: Started Docker Application Container Engine.
	* Nov 09 21:58:26 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:26.315319174Z" level=info msg="API listen on [::]:2376"
	* Nov 09 21:58:27 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:27.325036592Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* Nov 09 21:58:48 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:48.557727770Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	* d9131ed9fd099       0369cf4303ffd       14 seconds ago      Running             etcd                      4                   81b7be08aa5b6
	* c914f39d5ec3e       2f32d66b884f8       15 seconds ago      Running             kube-scheduler            4                   594bfb1f87d9a
	* 2f7f822063668       8603821e1a7a5       17 seconds ago      Exited              kube-controller-manager   4                   649abba9fbd51
	* bdd405c5e3967       d373dd5a8593a       26 seconds ago      Running             kube-proxy                3                   b0b630b831212
	* cfd9a5403c37d       bfe3a36ebd252       32 seconds ago      Running             coredns                   3                   3a5c2b6efba41
	* 0e50d449739b5       607331163122e       32 seconds ago      Exited              kube-apiserver            4                   d1749c00b7eb5
	* e9ec0f29f46e5       2f32d66b884f8       36 seconds ago      Exited              kube-scheduler            3                   5dc5e34b8efaa
	* 630f03ce9ad84       8603821e1a7a5       38 seconds ago      Exited              kube-controller-manager   3                   9f918c928b3bb
	* 205a57ef77fca       0369cf4303ffd       39 seconds ago      Exited              etcd                      3                   e6c14f47b2c5a
	* 55a96eb3b0023       bad58561c4be7       8 minutes ago       Exited              storage-provisioner       5                   f291a6e675cd6
	* dd4ed2167532d       bfe3a36ebd252       29 minutes ago      Exited              coredns                   2                   bb4fc5d49eb91
	* efbc05f8770f7       d373dd5a8593a       29 minutes ago      Exited              kube-proxy                2                   dacac3b545b18
	* 
	* ==> coredns [cfd9a5403c37] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* Trace[2019727887]: [10.000796971s] [10.000796971s] END
	* E1109 21:58:37.701635       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	* I1109 21:58:37.701664       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2020-11-09 21:58:27.700648202 +0000 UTC m=+0.028486122) (total time: 10.000838623s):
	* Trace[1427131847]: [10.000838623s] [10.000838623s] END
	* E1109 21:58:37.701670       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	* I1109 21:58:37.701694       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2020-11-09 21:58:27.700579337 +0000 UTC m=+0.028417240) (total time: 10.000910726s):
	* Trace[939984059]: [10.000910726s] [10.000910726s] END
	* E1109 21:58:37.701702       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	* I1109 21:58:49.487146       1 trace.go:116] Trace[336122540]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2020-11-09 21:58:38.556580457 +0000 UTC m=+10.884418482) (total time: 10.930517362s):
	* Trace[336122540]: [10.930517362s] [10.930517362s] END
	* E1109 21:58:49.487173       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* I1109 21:58:49.487175       1 trace.go:116] Trace[208240456]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2020-11-09 21:58:38.841535144 +0000 UTC m=+11.169373156) (total time: 10.645604392s):
	* Trace[208240456]: [10.645604392s] [10.645604392s] END
	* E1109 21:58:49.487194       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* I1109 21:58:49.487359       1 trace.go:116] Trace[1106410694]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2020-11-09 21:58:39.05244409 +0000 UTC m=+11.380282117) (total time: 10.434864905s):
	* Trace[1106410694]: [10.434864905s] [10.434864905s] END
	* E1109 21:58:49.487393       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E1109 21:58:51.433121       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E1109 21:58:51.569531       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E1109 21:58:52.391088       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E1109 21:58:55.788242       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E1109 21:58:55.852058       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E1109 21:58:57.092399       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* 
	* ==> coredns [dd4ed2167532] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* [INFO] SIGTERM: Shutting down servers then terminating
	* [INFO] plugin/health: Going into lameduck mode for 5s
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +1.530005] IPv4: martian source 10.85.0.5 from 10.85.0.5, on dev eth0
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff 82 d8 18 59 4a 0a 08 06        .........YJ...
	* [  +1.203191] IPv4: martian source 10.85.0.6 from 10.85.0.6, on dev eth0
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 8a cb 1f 38 df 95 08 06        .........8....
	* [  +1.852838] IPv4: martian source 10.85.0.7 from 10.85.0.7, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 5e 24 5d fa e9 1b 08 06        ......^$].....
	* [  +1.041635] IPv4: martian source 10.85.0.8 from 10.85.0.8, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 76 68 af f7 2c 06 08 06        ......vh..,...
	* [  +1.116263] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth4d8bcb5f
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 36 36 42 29 05 47 08 06        ......66B).G..
	* [ +10.297294] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth9e927cd5
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff de 6c 3a 49 99 f8 08 06        .......l:I....
	* [Nov 9 21:58] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +12.171317] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 9a 2f 72 7c a6 45 08 06        ......./r|.E..
	* [  +0.000006] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev eth0
	* [  +0.000001] ll header: 00000000: ff ff ff ff ff ff 9a 2f 72 7c a6 45 08 06        ......./r|.E..
	* [  +0.372482] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9a 2f 72 7c a6 45 08 06        ......./r|.E..
	* [  +2.327611] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 76 c5 aa d5 70 3b 08 06        ......v...p;..
	* [ +18.378177] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 76 c5 aa d5 70 3b 08 06        ......v...p;..
	* [  +0.000462] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 9a 2f 72 7c a6 45 08 06        ......./r|.E..
	* 
	* ==> etcd [205a57ef77fc] <==
	* 2020-11-09 21:58:21.207944 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	* 2020-11-09 21:58:21.208179 I | embed: listening for metrics on http://127.0.0.1:2381
	* 2020-11-09 21:58:21.208331 I | embed: listening for peers on 192.168.49.16:2380
	* raft2020/11/09 21:58:22 INFO: 9715fd91f1feb06b is starting a new election at term 4
	* raft2020/11/09 21:58:22 INFO: 9715fd91f1feb06b became candidate at term 5
	* raft2020/11/09 21:58:22 INFO: 9715fd91f1feb06b received MsgVoteResp from 9715fd91f1feb06b at term 5
	* raft2020/11/09 21:58:22 INFO: 9715fd91f1feb06b became leader at term 5
	* raft2020/11/09 21:58:22 INFO: raft.node: 9715fd91f1feb06b elected leader 9715fd91f1feb06b at term 5
	* 2020-11-09 21:58:22.685643 I | etcdserver: published {Name:functional-20201109132758-342799 ClientURLs:[https://192.168.49.16:2379]} to cluster 4b991d4e91d62980
	* 2020-11-09 21:58:22.685743 I | embed: ready to serve client requests
	* 2020-11-09 21:58:22.685873 I | embed: ready to serve client requests
	* 2020-11-09 21:58:22.687799 I | embed: serving client requests on 192.168.49.16:2379
	* 2020-11-09 21:58:22.687842 I | embed: serving client requests on 127.0.0.1:2379
	* 2020-11-09 21:58:23.169409 W | etcdserver: read-only range request "key:\"/registry/masterleases/192.168.49.16\" " with result "range_response_count:1 size:135" took too long (298.015418ms) to execute
	* 2020-11-09 21:58:23.169777 W | etcdserver: read-only range request "key:\"/registry/ingress\" range_end:\"/registry/ingrest\" count_only:true " with result "range_response_count:0 size:5" took too long (290.7653ms) to execute
	* 2020-11-09 21:58:23.322536 I | embed: rejected connection from "127.0.0.1:52818" (error "read tcp 127.0.0.1:2379->127.0.0.1:52818: read: connection reset by peer", ServerName "")
	* 2020-11-09 21:58:23.322912 I | embed: rejected connection from "127.0.0.1:52770" (error "read tcp 127.0.0.1:2379->127.0.0.1:52770: read: connection reset by peer", ServerName "")
	* 2020-11-09 21:58:23.322966 I | embed: rejected connection from "127.0.0.1:52822" (error "read tcp 127.0.0.1:2379->127.0.0.1:52822: read: connection reset by peer", ServerName "")
	* 2020-11-09 21:58:23.322991 I | embed: rejected connection from "127.0.0.1:52824" (error "read tcp 127.0.0.1:2379->127.0.0.1:52824: read: connection reset by peer", ServerName "")
	* 2020-11-09 21:58:23.324362 I | embed: rejected connection from "127.0.0.1:52758" (error "read tcp 127.0.0.1:2379->127.0.0.1:52758: read: connection reset by peer", ServerName "")
	* 2020-11-09 21:58:24.904792 N | pkg/osutil: received terminated signal, shutting down...
	* WARNING: 2020/11/09 21:58:24 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* WARNING: 2020/11/09 21:58:25 grpc: addrConn.createTransport failed to connect to {192.168.49.16:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.49.16:2379: connect: connection refused". Reconnecting...
	* 2020-11-09 21:58:25.905793 I | etcdserver: skipped leadership transfer for single voting member cluster
	* WARNING: 2020/11/09 21:58:25 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: operation was canceled". Reconnecting...
	* 
	* ==> etcd [d9131ed9fd09] <==
	* 2020-11-09 21:58:54.515908 I | embed: initial cluster = 
	* 2020-11-09 21:58:54.544900 I | etcdserver: restarting member 9715fd91f1feb06b in cluster 4b991d4e91d62980 at commit index 2229
	* raft2020/11/09 21:58:54 INFO: 9715fd91f1feb06b switched to configuration voters=()
	* raft2020/11/09 21:58:54 INFO: 9715fd91f1feb06b became follower at term 5
	* raft2020/11/09 21:58:54 INFO: newRaft 9715fd91f1feb06b [peers: [], term: 5, commit: 2229, applied: 0, lastindex: 2229, lastterm: 5]
	* 2020-11-09 21:58:54.547501 W | auth: simple token is not cryptographically signed
	* 2020-11-09 21:58:54.549427 I | mvcc: restore compact to 1321
	* 2020-11-09 21:58:54.554450 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	* raft2020/11/09 21:58:54 INFO: 9715fd91f1feb06b switched to configuration voters=(10886886477510127723)
	* 2020-11-09 21:58:54.555534 I | etcdserver/membership: added member 9715fd91f1feb06b [https://192.168.49.16:2380] to cluster 4b991d4e91d62980
	* 2020-11-09 21:58:54.555643 N | etcdserver/membership: set the initial cluster version to 3.4
	* 2020-11-09 21:58:54.555728 I | etcdserver/api: enabled capabilities for version 3.4
	* 2020-11-09 21:58:54.558567 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	* 2020-11-09 21:58:54.558654 I | embed: listening for peers on 192.168.49.16:2380
	* 2020-11-09 21:58:54.559026 I | embed: listening for metrics on http://127.0.0.1:2381
	* raft2020/11/09 21:58:56 INFO: 9715fd91f1feb06b is starting a new election at term 5
	* raft2020/11/09 21:58:56 INFO: 9715fd91f1feb06b became candidate at term 6
	* raft2020/11/09 21:58:56 INFO: 9715fd91f1feb06b received MsgVoteResp from 9715fd91f1feb06b at term 6
	* raft2020/11/09 21:58:56 INFO: 9715fd91f1feb06b became leader at term 6
	* raft2020/11/09 21:58:56 INFO: raft.node: 9715fd91f1feb06b elected leader 9715fd91f1feb06b at term 6
	* 2020-11-09 21:58:56.146818 I | embed: ready to serve client requests
	* 2020-11-09 21:58:56.146861 I | etcdserver: published {Name:functional-20201109132758-342799 ClientURLs:[https://192.168.49.16:2379]} to cluster 4b991d4e91d62980
	* 2020-11-09 21:58:56.146881 I | embed: ready to serve client requests
	* 2020-11-09 21:58:56.148658 I | embed: serving client requests on 127.0.0.1:2379
	* 2020-11-09 21:58:56.148791 I | embed: serving client requests on 192.168.49.16:2379
	* 
	* ==> kernel <==
	*  21:59:00 up  1:41,  0 users,  load average: 11.72, 11.05, 9.67
	* Linux functional-20201109132758-342799 4.9.0-14-amd64 #1 SMP Debian 4.9.240-2 (2020-10-30) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [0e50d449739b] <==
	* Flag --insecure-port has been deprecated, This flag will be removed in a future version.
	* I1109 21:58:27.624938       1 server.go:625] external host was not specified, using 192.168.49.16
	* I1109 21:58:27.625715       1 server.go:163] Version: v1.19.2
	* I1109 21:58:28.419553       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	* I1109 21:58:28.419585       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	* I1109 21:58:28.420986       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	* I1109 21:58:28.421010       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	* I1109 21:58:28.423361       1 client.go:360] parsed scheme: "endpoint"
	* I1109 21:58:28.423411       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	* W1109 21:58:28.423725       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* I1109 21:58:29.417938       1 client.go:360] parsed scheme: "endpoint"
	* I1109 21:58:29.417979       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	* W1109 21:58:29.418273       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:58:29.424253       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:58:30.419025       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:58:30.957965       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:58:32.036834       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:58:33.606991       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:58:34.670653       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:58:38.053928       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:58:38.907069       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:58:45.612419       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:58:45.616107       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* Error: context deadline exceeded
	* 
	* ==> kube-controller-manager [2f7f82206366] <==
	* 	/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	* k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0003ee080, 0x49f9cc0, 0xc00080e2d0, 0x45ac601, 0xc00018c0c0)
	* 	/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	* k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0003ee080, 0x3b9aca00, 0x0, 0x1, 0xc00018c0c0)
	* 	/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
	* k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0003ee080, 0x3b9aca00, 0xc00018c0c0)
	* 	/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
	* created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
	* 	/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:247 +0x1b3
	* 
	* goroutine 128 [select]:
	* k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0003ee090, 0x49f9cc0, 0xc00061e330, 0x406501, 0xc00018c0c0)
	* 	/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	* k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0003ee090, 0xdf8475800, 0x0, 0xc000510701, 0xc00018c0c0)
	* 	/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
	* k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0003ee090, 0xdf8475800, 0xc00018c0c0)
	* 	/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
	* created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
	* 	/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:250 +0x22b
	* 
	* goroutine 157 [runnable]:
	* net/http.setRequestCancel.func4(0x0, 0xc000ca2630, 0xc0004a8500, 0xc0004b1488, 0xc00018cea0)
	* 	/usr/local/go/src/net/http/client.go:398 +0xe5
	* created by net/http.setRequestCancel
	* 	/usr/local/go/src/net/http/client.go:397 +0x337
	* 
	* ==> kube-controller-manager [630f03ce9ad8] <==
	* 
	* ==> kube-proxy [bdd405c5e396] <==
	* E1109 21:58:43.870689       1 node.go:125] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20201109132758-342799": net/http: TLS handshake timeout
	* E1109 21:58:49.487488       1 node.go:125] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20201109132758-342799": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:51.638183       1 node.go:125] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20201109132758-342799": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:55.938707       1 node.go:125] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20201109132758-342799": dial tcp 192.168.49.16:8441: connect: connection refused
	* 
	* ==> kube-proxy [efbc05f8770f] <==
	* E1109 21:29:25.688644       1 node.go:125] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20201109132758-342799": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:29:36.798011       1 node.go:125] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20201109132758-342799": net/http: TLS handshake timeout
	* E1109 21:29:39.762072       1 node.go:125] Failed to retrieve node info: nodes "functional-20201109132758-342799" is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:node-proxier" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	* I1109 21:29:44.703497       1 node.go:136] Successfully retrieved node IP: 192.168.49.16
	* I1109 21:29:44.703569       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.49.16), assume IPv4 operation
	* W1109 21:29:44.784619       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
	* I1109 21:29:44.784724       1 server_others.go:186] Using iptables Proxier.
	* W1109 21:29:44.784743       1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
	* I1109 21:29:44.784750       1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
	* I1109 21:29:44.785034       1 server.go:650] Version: v1.19.2
	* I1109 21:29:44.785707       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:29:44.786523       1 config.go:315] Starting service config controller
	* I1109 21:29:44.786550       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I1109 21:29:44.786603       1 config.go:224] Starting endpoint slice config controller
	* I1109 21:29:44.786616       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I1109 21:29:44.886701       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* I1109 21:29:44.886748       1 shared_informer.go:247] Caches are synced for service config 
	* 
	* ==> kube-scheduler [c914f39d5ec3] <==
	* E1109 21:58:52.615656       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.49.16:8441/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.052569       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.16:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.161257       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.16:8441/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.193787       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.16:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.210773       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.16:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.302860       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.16:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.478788       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.16:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.552680       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.49.16:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.559088       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.16:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.711409       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.16:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.734163       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.49.16:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.799613       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.16:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:56.164335       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.49.16:8441/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:57.045011       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.16:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:57.344004       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.16:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:57.641543       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.16:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:58.018498       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.16:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:58.106449       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.49.16:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:58.168242       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.16:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:58.301410       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.16:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:58.341756       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.16:8441/api/v1/pods?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:58.941596       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.16:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:58.981538       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.16:8441/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:59.004377       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.16:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:59.271484       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.49.16:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* 
	* ==> kube-scheduler [e9ec0f29f46e] <==
	* I1109 21:58:23.876286       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:58:23.876353       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:58:24.341894       1 serving.go:331] Generated self-signed cert in-memory
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2020-11-09 21:28:01 UTC, end at Mon 2020-11-09 21:59:01 UTC. --
	* Nov 09 21:58:57 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:57.767354    6062 reflector.go:127] object-"kube-system"/"kube-proxy-token-cn8nf": Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dkube-proxy-token-cn8nf&resourceVersion=457": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:57 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:57.885752    6062 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8441/api/v1/pods?fieldSelector=spec.nodeName%3Dfunctional-20201109132758-342799&resourceVersion=1656": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:58 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:58.359106    6062 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8441/apis/storage.k8s.io/v1/csidrivers?resourceVersion=457": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:58 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:58.574119    6062 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.RuntimeClass: failed to list *v1beta1.RuntimeClass: Get "https://control-plane.minikube.internal:8441/apis/node.k8s.io/v1beta1/runtimeclasses?resourceVersion=457": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:59 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:59.135767    6062 reflector.go:127] object-"kube-system"/"storage-provisioner-token-5w2rk": Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dstorage-provisioner-token-5w2rk&resourceVersion=457": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:59 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:59.475708    6062 reflector.go:127] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=457": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:59 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:59.490041    6062 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "functional-20201109132758-342799": Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20201109132758-342799?resourceVersion=0&timeout=10s": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:59 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:59.490360    6062 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "functional-20201109132758-342799": Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20201109132758-342799?timeout=10s": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:59 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:59.490587    6062 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "functional-20201109132758-342799": Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20201109132758-342799?timeout=10s": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:59 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:59.490809    6062 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "functional-20201109132758-342799": Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20201109132758-342799?timeout=10s": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:59 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:59.491133    6062 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "functional-20201109132758-342799": Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20201109132758-342799?timeout=10s": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:59 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:59.491152    6062 kubelet_node_status.go:429] Unable to update node status: update node status exceeds retry count
	* Nov 09 21:58:59 functional-20201109132758-342799 kubelet[6062]: I1109 21:58:59.777623    6062 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 630f03ce9ad844f24c5bec734477ac35695171976d8bb82b125676fedc2832af
	* Nov 09 21:58:59 functional-20201109132758-342799 kubelet[6062]: I1109 21:58:59.778207    6062 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 2f7f822063668f5d9c74364ac7747ef30c89d2b72ef7196473e87b3a6d9431ec
	* Nov 09 21:58:59 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:59.779069    6062 pod_workers.go:191] Error syncing pod dcc127c185c80a61d90d8e659e768641 ("kube-controller-manager-functional-20201109132758-342799_kube-system(dcc127c185c80a61d90d8e659e768641)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-20201109132758-342799_kube-system(dcc127c185c80a61d90d8e659e768641)"
	* Nov 09 21:58:59 functional-20201109132758-342799 kubelet[6062]: W1109 21:58:59.779488    6062 status_manager.go:550] Failed to get status for pod "kube-controller-manager-functional-20201109132758-342799_kube-system(dcc127c185c80a61d90d8e659e768641)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-20201109132758-342799": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:59:00 functional-20201109132758-342799 kubelet[6062]: E1109 21:59:00.369894    6062 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?resourceVersion=457": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:59:01 functional-20201109132758-342799 kubelet[6062]: I1109 21:59:01.194116    6062 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 55a96eb3b00231fe6e3bd13b10b73760ad5e6031a6714eff726102dfaf4b58c3
	* Nov 09 21:59:01 functional-20201109132758-342799 kubelet[6062]: W1109 21:59:01.194278    6062 status_manager.go:550] Failed to get status for pod "etcd-functional-20201109132758-342799_kube-system(c07f6bd14e48450e4d428f958a798e0e)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-20201109132758-342799": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:59:01 functional-20201109132758-342799 kubelet[6062]: W1109 21:59:01.194697    6062 status_manager.go:550] Failed to get status for pod "kube-apiserver-functional-20201109132758-342799_kube-system(f5ecdefc6b776519ca22189eb9472242)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-20201109132758-342799": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:59:01 functional-20201109132758-342799 kubelet[6062]: W1109 21:59:01.195174    6062 status_manager.go:550] Failed to get status for pod "kube-controller-manager-functional-20201109132758-342799_kube-system(dcc127c185c80a61d90d8e659e768641)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-20201109132758-342799": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:59:01 functional-20201109132758-342799 kubelet[6062]: W1109 21:59:01.195581    6062 status_manager.go:550] Failed to get status for pod "kube-scheduler-functional-20201109132758-342799_kube-system(ff7d12f9e4f14e202a85a7c5534a3129)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20201109132758-342799": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:59:01 functional-20201109132758-342799 kubelet[6062]: W1109 21:59:01.196071    6062 status_manager.go:550] Failed to get status for pod "kube-proxy-c7tgz_kube-system(a32163c8-dd65-4327-b113-df1425934a57)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-c7tgz": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:59:01 functional-20201109132758-342799 kubelet[6062]: W1109 21:59:01.196495    6062 status_manager.go:550] Failed to get status for pod "coredns-f9fd979d6-sf7ct_kube-system(74ac25d3-7e2a-44d7-9605-80a4b19acb84)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-f9fd979d6-sf7ct": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:59:01 functional-20201109132758-342799 kubelet[6062]: W1109 21:59:01.196986    6062 status_manager.go:550] Failed to get status for pod "storage-provisioner_kube-system(8a2dc99d-6b4d-4f0b-9230-ac84422061b4)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner": dial tcp 192.168.49.16:8441: connect: connection refused
	* 
	* ==> storage-provisioner [55a96eb3b002] <==
	* I1109 21:50:53.548097       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I1109 21:51:12.286110       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I1109 21:51:12.287203       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_functional-20201109132758-342799_25e58adb-07af-41d5-9451-984ece52a972!
	* I1109 21:51:12.287838       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d68bb4f9-c53d-4241-bef8-3cce70269a69", APIVersion:"v1", ResourceVersion:"1370", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20201109132758-342799_25e58adb-07af-41d5-9451-984ece52a972 became leader
	* I1109 21:51:12.387684       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_functional-20201109132758-342799_25e58adb-07af-41d5-9451-984ece52a972!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 13:59:00.007441  666640 logs.go:181] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.19.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.19.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	E1109 13:59:00.749740  666640 logs.go:181] command /bin/bash -c "docker logs --tail 25 630f03ce9ad8" failed with error: /bin/bash -c "docker logs --tail 25 630f03ce9ad8": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: 630f03ce9ad8
	 output: "\n** stderr ** \nError: No such container: 630f03ce9ad8\n\n** /stderr **"
	! unable to fetch logs for: describe nodes, kube-controller-manager [630f03ce9ad8]

                                                
                                                
** /stderr **
helpers_test.go:243: failed logs error: exit status 110
--- FAIL: TestFunctional/parallel/MySQL (3.33s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (37.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:175: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20201109132758-342799 docker-env) && out/minikube-linux-amd64 status -p functional-20201109132758-342799"
functional_test.go:175: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20201109132758-342799 docker-env) && out/minikube-linux-amd64 status -p functional-20201109132758-342799": exit status 2 (8.186548374s)

                                                
                                                
-- stdout --
	functional-20201109132758-342799
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
functional_test.go:181: failed to do status after eval-ing docker-env. error: exit status 2
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/DockerEnv]: docker inspect <======
helpers_test.go:225: (dbg) Run:  docker inspect functional-20201109132758-342799
helpers_test.go:229: (dbg) docker inspect functional-20201109132758-342799:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7858cd2d1cc569692bfecfa4a76f6e42c288cf8f4bfbab92295b73794e68dd24",
	        "Created": "2020-11-09T21:28:00.37187081Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 376738,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-11-09T21:28:00.960644046Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e0876e0a2db41a04c8143cbd27a7a5f9e10b610cb093093def031c59e5b44b0c",
	        "ResolvConfPath": "/var/lib/docker/containers/7858cd2d1cc569692bfecfa4a76f6e42c288cf8f4bfbab92295b73794e68dd24/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7858cd2d1cc569692bfecfa4a76f6e42c288cf8f4bfbab92295b73794e68dd24/hostname",
	        "HostsPath": "/var/lib/docker/containers/7858cd2d1cc569692bfecfa4a76f6e42c288cf8f4bfbab92295b73794e68dd24/hosts",
	        "LogPath": "/var/lib/docker/containers/7858cd2d1cc569692bfecfa4a76f6e42c288cf8f4bfbab92295b73794e68dd24/7858cd2d1cc569692bfecfa4a76f6e42c288cf8f4bfbab92295b73794e68dd24-json.log",
	        "Name": "/functional-20201109132758-342799",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20201109132758-342799:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20201109132758-342799",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/df0e95b8d2e740ac9fd4faebf5b3e8bcafd6bccb9d4c5c3929b9240e5a0b0d76-init/diff:/var/lib/docker/overlay2/95e86d765bff52afd170fbce58d14aba80acb592622f18ddf93f06d4a9f68b7b/diff:/var/lib/docker/overlay2/f3b9d7ca87d0bc2c33a2b73d4acfe36b9573ed84758410f9fb2646a520f241fe/diff:/var/lib/docker/overlay2/207fc0f6ee67dd96ccb25c63e93e7189f5f397b509f8c71ead5583a80c5f94b5/diff:/var/lib/docker/overlay2/8caa5ec9f05b14f99c7708700c1734fe9d519d629a5228c0c3cb1231f1cef09e/diff:/var/lib/docker/overlay2/f2f5d9470c647e56b32cee1438db6b6249291738854b6e7a0a4a405034649aa6/diff:/var/lib/docker/overlay2/99b3672a581ed70eb5d25da5aed44455272a7adbd48148c91dc38abfbaa6e93d/diff:/var/lib/docker/overlay2/4bc3f7ac122327f368cdfb9bf82d578fce25cf7f63160aa08d869610da8889f4/diff:/var/lib/docker/overlay2/0f945a2ada7520d0f95289ddf08a8e7aedfd5dbfad8087529175cd81d61b52bc/diff:/var/lib/docker/overlay2/a14122d44a5bdb05416a0e7fbe1684738e86454c2eeb7671ecc856e6fdaa0273/diff:/var/lib/docker/overlay2/e669c0
1451df27b1aeed4835d4d9d40fae282947f2b260d9bc1d86355f464b80/diff:/var/lib/docker/overlay2/f0227d1f2e3671850f58a9e89d347d8c2a26c40d79b078f02122819452c582d8/diff:/var/lib/docker/overlay2/35b3431b00b5c7bd66b1d24487a8a45645708ce7a87848986068c0df1515b812/diff:/var/lib/docker/overlay2/c1af3457c560ed19e519e64ff919ac7035d8d9801f4f44adf7b160339827ebd0/diff:/var/lib/docker/overlay2/3861f37a52f49a93d483d69cc77384a5792b260325af8224c6225039f7480889/diff:/var/lib/docker/overlay2/05239e7556fc89f82e761ff1ddd2d1ed0ee1b426a039ad3c9a3ba0519d737f51/diff:/var/lib/docker/overlay2/cb1f04e7554841e6f2102080f7854a3c7e464e45648c1419ef2f8b4d8a03f0bb/diff:/var/lib/docker/overlay2/80908997b1122a686bee670e8dd93659dece6b51614d8576f02c11b96178534d/diff:/var/lib/docker/overlay2/6c4d0be909f5f0a95818a6009b44c355b57e39b0709de1cce2ba45597354d590/diff:/var/lib/docker/overlay2/3a855d5f2b162c2d938a5745ba8131c9a2b244967b4a03f6cddd177236f3a934/diff:/var/lib/docker/overlay2/0ab0a2a3e614830a41a389aa6ff55503f9c10b1227acfb927550f9726eb9a605/diff:/var/lib/d
ocker/overlay2/3331dc047d4c1c4d3b6bf48aba65be1ad34cff6d25329ef56e91ad8e4ee65fde/diff:/var/lib/docker/overlay2/d286c2ac5bb00869e09924bb7fc7389073b7267af1cc4c3ae6a339b29e4c0cc1/diff:/var/lib/docker/overlay2/d0a518fbd2bc93756d11040cfe4d7757f96e813dd9ddf918a54a82f7e61cb791/diff:/var/lib/docker/overlay2/2d63359a4d0da5faa54d017f487391caaee972fe4fb98db4b82e7aee2d9b01ee/diff:/var/lib/docker/overlay2/40ca142bba947004e2782bce25bd04810589fffbf8086ba91e3a074517cfa13c/diff:/var/lib/docker/overlay2/5b738c1270b81825ef44e9bd466dd8533bc4e3e3796355d7dc5dc104fb3a18c8/diff:/var/lib/docker/overlay2/c57d8a6f6b648bd2cf9d7140762b8590e1a71b9040c8a0f2b8ab31f65011bcd2/diff:/var/lib/docker/overlay2/37f83aba64f92a6cf695468f5b0a1016634c95e09933dcecec297cca22b9038b/diff:/var/lib/docker/overlay2/396d3843ed4e04893bd4bf961eab2522ce90962e626e057f5be38cbe0ed65d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/df0e95b8d2e740ac9fd4faebf5b3e8bcafd6bccb9d4c5c3929b9240e5a0b0d76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/df0e95b8d2e740ac9fd4faebf5b3e8bcafd6bccb9d4c5c3929b9240e5a0b0d76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/df0e95b8d2e740ac9fd4faebf5b3e8bcafd6bccb9d4c5c3929b9240e5a0b0d76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20201109132758-342799",
	                "Source": "/var/lib/docker/volumes/functional-20201109132758-342799/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20201109132758-342799",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20201109132758-342799",
	                "name.minikube.sigs.k8s.io": "functional-20201109132758-342799",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7f4d1706982ed361770b3c3369899f16f62568439a75c6ea340b652b7323965b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7f4d1706982e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20201109132758-342799": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.16"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7858cd2d1cc5"
	                    ],
	                    "NetworkID": "eb960218460f121c58b440780515f4be612f339ac57d79feea78d1a8b7c3a107",
	                    "EndpointID": "5b5005f05bc2bb73a04355957f78b7c80f64e9da505255c3afb87c886cac7394",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:10",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-20201109132758-342799 -n functional-20201109132758-342799

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
helpers_test.go:233: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-20201109132758-342799 -n functional-20201109132758-342799: exit status 2 (21.526744852s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:233: status error: exit status 2 (may be ok)
helpers_test.go:238: <<< TestFunctional/parallel/DockerEnv FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestFunctional/parallel/DockerEnv]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 logs -n 25

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
helpers_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20201109132758-342799 logs -n 25: exit status 110 (7.882340425s)

                                                
                                                
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Mon 2020-11-09 21:28:01 UTC, end at Mon 2020-11-09 21:58:54 UTC. --
	* Nov 09 21:58:24 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:24.764233198Z" level=warning msg="Your kernel does not support swap memory limit"
	* Nov 09 21:58:24 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:24.764287109Z" level=warning msg="Your kernel does not support cgroup rt period"
	* Nov 09 21:58:24 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:24.764299125Z" level=warning msg="Your kernel does not support cgroup rt runtime"
	* Nov 09 21:58:24 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:24.764578872Z" level=info msg="Loading containers: start."
	* Nov 09 21:58:25 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:25.100857944Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:58:25 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:25.176574832Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:58:25 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:25.191206000Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:58:25 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:25.193771908Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:58:25 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:25.258894135Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:58:25 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:25.424108426Z" level=info msg="Removing stale sandbox 725b2eed6e6d3e7933e03509e1bca581410370ef3e4818b4c98144ac61f0e685 (e6c14f47b2c5a2098ff8b55c3feaa43f179e62ff98009c74f4046b1cf6976abb)"
	* Nov 09 21:58:25 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:25.427092623Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint d4464ada8211ea7a0cc8872667fcd76c480b983eb59db3c8d28648a25e5661c1 c9fee957ded97eb86603367ea6d629891db56a57590cf04ab207d7cab514877e], retrying...."
	* Nov 09 21:58:25 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:25.866822315Z" level=info msg="Removing stale sandbox d7e5ff17476d5b1c5d6270adc70fe4fbc6eea58d132e0f4113342b828e611683 (5dc5e34b8efaaae38d2c3d474665d0150120246541be2e7b3adbda253d17dc31)"
	* Nov 09 21:58:25 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:25.869148099Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint d4464ada8211ea7a0cc8872667fcd76c480b983eb59db3c8d28648a25e5661c1 ba8a3ddaf4db0d549008b0947e52de674dbfa9bf94f848b5e7773e604462ba49], retrying...."
	* Nov 09 21:58:26 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:26.028481769Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:58:26 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:26.214463324Z" level=info msg="Removing stale sandbox fead347bc9e2e4479fcf94197026a8d40d5d179ee63ad5be0a4f3c8704dd0681 (9f918c928b3bb06b149b9ad878f48b5c33d494213e54a8f1609e633f2c5ea845)"
	* Nov 09 21:58:26 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:26.217070098Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint d4464ada8211ea7a0cc8872667fcd76c480b983eb59db3c8d28648a25e5661c1 7f05d508e76152ef0b2d54803c56c1029ad7541683d57ec50eba1e89c11a937e], retrying...."
	* Nov 09 21:58:26 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:26.240050377Z" level=info msg="There are old running containers, the network config will not take affect"
	* Nov 09 21:58:26 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:26.252576542Z" level=info msg="Loading containers: done."
	* Nov 09 21:58:26 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:26.298616439Z" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 version=19.03.13
	* Nov 09 21:58:26 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:26.298717241Z" level=info msg="Daemon has completed initialization"
	* Nov 09 21:58:26 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:26.314904676Z" level=info msg="API listen on /var/run/docker.sock"
	* Nov 09 21:58:26 functional-20201109132758-342799 systemd[1]: Started Docker Application Container Engine.
	* Nov 09 21:58:26 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:26.315319174Z" level=info msg="API listen on [::]:2376"
	* Nov 09 21:58:27 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:27.325036592Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* Nov 09 21:58:48 functional-20201109132758-342799 dockerd[16121]: time="2020-11-09T21:58:48.557727770Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	* d9131ed9fd099       0369cf4303ffd       9 seconds ago       Created             etcd                      4                   81b7be08aa5b6
	* c914f39d5ec3e       2f32d66b884f8       10 seconds ago      Running             kube-scheduler            4                   594bfb1f87d9a
	* 2f7f822063668       8603821e1a7a5       12 seconds ago      Running             kube-controller-manager   4                   649abba9fbd51
	* bdd405c5e3967       d373dd5a8593a       21 seconds ago      Running             kube-proxy                3                   b0b630b831212
	* cfd9a5403c37d       bfe3a36ebd252       27 seconds ago      Running             coredns                   3                   3a5c2b6efba41
	* 0e50d449739b5       607331163122e       27 seconds ago      Exited              kube-apiserver            4                   d1749c00b7eb5
	* e9ec0f29f46e5       2f32d66b884f8       31 seconds ago      Exited              kube-scheduler            3                   5dc5e34b8efaa
	* 630f03ce9ad84       8603821e1a7a5       33 seconds ago      Exited              kube-controller-manager   3                   9f918c928b3bb
	* 205a57ef77fca       0369cf4303ffd       34 seconds ago      Exited              etcd                      3                   e6c14f47b2c5a
	* 55a96eb3b0023       bad58561c4be7       8 minutes ago       Exited              storage-provisioner       5                   f291a6e675cd6
	* dd4ed2167532d       bfe3a36ebd252       29 minutes ago      Exited              coredns                   2                   bb4fc5d49eb91
	* 1eb996e347f57       607331163122e       29 minutes ago      Exited              kube-apiserver            3                   cb1d16261cbfa
	* efbc05f8770f7       d373dd5a8593a       29 minutes ago      Exited              kube-proxy                2                   dacac3b545b18
	* 
	* ==> coredns [cfd9a5403c37] <==
	* linux/amd64, go1.14.4, f59c03d
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* I1109 21:58:37.701595       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2020-11-09 21:58:27.700666703 +0000 UTC m=+0.028504622) (total time: 10.000796971s):
	* Trace[2019727887]: [10.000796971s] [10.000796971s] END
	* E1109 21:58:37.701635       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	* I1109 21:58:37.701664       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2020-11-09 21:58:27.700648202 +0000 UTC m=+0.028486122) (total time: 10.000838623s):
	* Trace[1427131847]: [10.000838623s] [10.000838623s] END
	* E1109 21:58:37.701670       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	* I1109 21:58:37.701694       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2020-11-09 21:58:27.700579337 +0000 UTC m=+0.028417240) (total time: 10.000910726s):
	* Trace[939984059]: [10.000910726s] [10.000910726s] END
	* E1109 21:58:37.701702       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	* I1109 21:58:49.487146       1 trace.go:116] Trace[336122540]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2020-11-09 21:58:38.556580457 +0000 UTC m=+10.884418482) (total time: 10.930517362s):
	* Trace[336122540]: [10.930517362s] [10.930517362s] END
	* E1109 21:58:49.487173       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* I1109 21:58:49.487175       1 trace.go:116] Trace[208240456]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2020-11-09 21:58:38.841535144 +0000 UTC m=+11.169373156) (total time: 10.645604392s):
	* Trace[208240456]: [10.645604392s] [10.645604392s] END
	* E1109 21:58:49.487194       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* I1109 21:58:49.487359       1 trace.go:116] Trace[1106410694]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2020-11-09 21:58:39.05244409 +0000 UTC m=+11.380282117) (total time: 10.434864905s):
	* Trace[1106410694]: [10.434864905s] [10.434864905s] END
	* E1109 21:58:49.487393       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E1109 21:58:51.433121       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E1109 21:58:51.569531       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E1109 21:58:52.391088       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* 
	* ==> coredns [dd4ed2167532] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* [INFO] SIGTERM: Shutting down servers then terminating
	* [INFO] plugin/health: Going into lameduck mode for 5s
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +1.530005] IPv4: martian source 10.85.0.5 from 10.85.0.5, on dev eth0
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff 82 d8 18 59 4a 0a 08 06        .........YJ...
	* [  +1.203191] IPv4: martian source 10.85.0.6 from 10.85.0.6, on dev eth0
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 8a cb 1f 38 df 95 08 06        .........8....
	* [  +1.852838] IPv4: martian source 10.85.0.7 from 10.85.0.7, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 5e 24 5d fa e9 1b 08 06        ......^$].....
	* [  +1.041635] IPv4: martian source 10.85.0.8 from 10.85.0.8, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 76 68 af f7 2c 06 08 06        ......vh..,...
	* [  +1.116263] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth4d8bcb5f
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 36 36 42 29 05 47 08 06        ......66B).G..
	* [ +10.297294] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth9e927cd5
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff de 6c 3a 49 99 f8 08 06        .......l:I....
	* [Nov 9 21:58] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +12.171317] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 9a 2f 72 7c a6 45 08 06        ......./r|.E..
	* [  +0.000006] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev eth0
	* [  +0.000001] ll header: 00000000: ff ff ff ff ff ff 9a 2f 72 7c a6 45 08 06        ......./r|.E..
	* [  +0.372482] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9a 2f 72 7c a6 45 08 06        ......./r|.E..
	* [  +2.327611] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 76 c5 aa d5 70 3b 08 06        ......v...p;..
	* [ +18.378177] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 76 c5 aa d5 70 3b 08 06        ......v...p;..
	* [  +0.000462] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 9a 2f 72 7c a6 45 08 06        ......./r|.E..
	* 
	* ==> etcd [205a57ef77fc] <==
	* 2020-11-09 21:58:21.207944 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	* 2020-11-09 21:58:21.208179 I | embed: listening for metrics on http://127.0.0.1:2381
	* 2020-11-09 21:58:21.208331 I | embed: listening for peers on 192.168.49.16:2380
	* raft2020/11/09 21:58:22 INFO: 9715fd91f1feb06b is starting a new election at term 4
	* raft2020/11/09 21:58:22 INFO: 9715fd91f1feb06b became candidate at term 5
	* raft2020/11/09 21:58:22 INFO: 9715fd91f1feb06b received MsgVoteResp from 9715fd91f1feb06b at term 5
	* raft2020/11/09 21:58:22 INFO: 9715fd91f1feb06b became leader at term 5
	* raft2020/11/09 21:58:22 INFO: raft.node: 9715fd91f1feb06b elected leader 9715fd91f1feb06b at term 5
	* 2020-11-09 21:58:22.685643 I | etcdserver: published {Name:functional-20201109132758-342799 ClientURLs:[https://192.168.49.16:2379]} to cluster 4b991d4e91d62980
	* 2020-11-09 21:58:22.685743 I | embed: ready to serve client requests
	* 2020-11-09 21:58:22.685873 I | embed: ready to serve client requests
	* 2020-11-09 21:58:22.687799 I | embed: serving client requests on 192.168.49.16:2379
	* 2020-11-09 21:58:22.687842 I | embed: serving client requests on 127.0.0.1:2379
	* 2020-11-09 21:58:23.169409 W | etcdserver: read-only range request "key:\"/registry/masterleases/192.168.49.16\" " with result "range_response_count:1 size:135" took too long (298.015418ms) to execute
	* 2020-11-09 21:58:23.169777 W | etcdserver: read-only range request "key:\"/registry/ingress\" range_end:\"/registry/ingrest\" count_only:true " with result "range_response_count:0 size:5" took too long (290.7653ms) to execute
	* 2020-11-09 21:58:23.322536 I | embed: rejected connection from "127.0.0.1:52818" (error "read tcp 127.0.0.1:2379->127.0.0.1:52818: read: connection reset by peer", ServerName "")
	* 2020-11-09 21:58:23.322912 I | embed: rejected connection from "127.0.0.1:52770" (error "read tcp 127.0.0.1:2379->127.0.0.1:52770: read: connection reset by peer", ServerName "")
	* 2020-11-09 21:58:23.322966 I | embed: rejected connection from "127.0.0.1:52822" (error "read tcp 127.0.0.1:2379->127.0.0.1:52822: read: connection reset by peer", ServerName "")
	* 2020-11-09 21:58:23.322991 I | embed: rejected connection from "127.0.0.1:52824" (error "read tcp 127.0.0.1:2379->127.0.0.1:52824: read: connection reset by peer", ServerName "")
	* 2020-11-09 21:58:23.324362 I | embed: rejected connection from "127.0.0.1:52758" (error "read tcp 127.0.0.1:2379->127.0.0.1:52758: read: connection reset by peer", ServerName "")
	* 2020-11-09 21:58:24.904792 N | pkg/osutil: received terminated signal, shutting down...
	* WARNING: 2020/11/09 21:58:24 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* WARNING: 2020/11/09 21:58:25 grpc: addrConn.createTransport failed to connect to {192.168.49.16:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.49.16:2379: connect: connection refused". Reconnecting...
	* 2020-11-09 21:58:25.905793 I | etcdserver: skipped leadership transfer for single voting member cluster
	* WARNING: 2020/11/09 21:58:25 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: operation was canceled". Reconnecting...
	* 
	* ==> kernel <==
	*  21:58:55 up  1:41,  0 users,  load average: 12.78, 11.22, 9.71
	* Linux functional-20201109132758-342799 4.9.0-14-amd64 #1 SMP Debian 4.9.240-2 (2020-10-30) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [0e50d449739b] <==
	* Flag --insecure-port has been deprecated, This flag will be removed in a future version.
	* I1109 21:58:27.624938       1 server.go:625] external host was not specified, using 192.168.49.16
	* I1109 21:58:27.625715       1 server.go:163] Version: v1.19.2
	* I1109 21:58:28.419553       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	* I1109 21:58:28.419585       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	* I1109 21:58:28.420986       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	* I1109 21:58:28.421010       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	* I1109 21:58:28.423361       1 client.go:360] parsed scheme: "endpoint"
	* I1109 21:58:28.423411       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	* W1109 21:58:28.423725       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* I1109 21:58:29.417938       1 client.go:360] parsed scheme: "endpoint"
	* I1109 21:58:29.417979       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
	* W1109 21:58:29.418273       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:58:29.424253       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:58:30.419025       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:58:30.957965       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:58:32.036834       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:58:33.606991       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:58:34.670653       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:58:38.053928       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:58:38.907069       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:58:45.612419       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:58:45.616107       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* Error: context deadline exceeded
	* 
	* ==> kube-apiserver [1eb996e347f5] <==
	* 
	* ==> kube-controller-manager [2f7f82206366] <==
	* Flag --port has been deprecated, see --secure-port instead.
	* I1109 21:58:42.929808       1 serving.go:331] Generated self-signed cert in-memory
	* I1109 21:58:43.363759       1 controllermanager.go:175] Version: v1.19.2
	* I1109 21:58:43.365053       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	* I1109 21:58:43.365082       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	* I1109 21:58:43.365728       1 secure_serving.go:197] Serving securely on 127.0.0.1:10257
	* I1109 21:58:43.365762       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* W1109 21:58:49.487107       1 controllermanager.go:628] fetch api resource lists failed, use legacy client builder: Get "https://192.168.49.16:8441/api/v1?timeout=32s": dial tcp 192.168.49.16:8441: connect: connection refused
	* 
	* ==> kube-controller-manager [630f03ce9ad8] <==
	* Flag --port has been deprecated, see --secure-port instead.
	* I1109 21:58:23.194939       1 serving.go:331] Generated self-signed cert in-memory
	* I1109 21:58:24.339675       1 controllermanager.go:175] Version: v1.19.2
	* I1109 21:58:24.340654       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	* I1109 21:58:24.340680       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	* I1109 21:58:24.341434       1 secure_serving.go:197] Serving securely on 127.0.0.1:10257
	* I1109 21:58:24.341553       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* W1109 21:58:24.341882       1 controllermanager.go:628] fetch api resource lists failed, use legacy client builder: Get "https://192.168.49.16:8441/api/v1?timeout=32s": dial tcp 192.168.49.16:8441: connect: connection refused
	* 
	* ==> kube-proxy [bdd405c5e396] <==
	* E1109 21:58:43.870689       1 node.go:125] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20201109132758-342799": net/http: TLS handshake timeout
	* E1109 21:58:49.487488       1 node.go:125] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20201109132758-342799": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:51.638183       1 node.go:125] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20201109132758-342799": dial tcp 192.168.49.16:8441: connect: connection refused
	* 
	* ==> kube-proxy [efbc05f8770f] <==
	* E1109 21:29:25.688644       1 node.go:125] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20201109132758-342799": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:29:36.798011       1 node.go:125] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20201109132758-342799": net/http: TLS handshake timeout
	* E1109 21:29:39.762072       1 node.go:125] Failed to retrieve node info: nodes "functional-20201109132758-342799" is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:node-proxier" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	* I1109 21:29:44.703497       1 node.go:136] Successfully retrieved node IP: 192.168.49.16
	* I1109 21:29:44.703569       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.49.16), assume IPv4 operation
	* W1109 21:29:44.784619       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
	* I1109 21:29:44.784724       1 server_others.go:186] Using iptables Proxier.
	* W1109 21:29:44.784743       1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
	* I1109 21:29:44.784750       1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
	* I1109 21:29:44.785034       1 server.go:650] Version: v1.19.2
	* I1109 21:29:44.785707       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:29:44.786523       1 config.go:315] Starting service config controller
	* I1109 21:29:44.786550       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I1109 21:29:44.786603       1 config.go:224] Starting endpoint slice config controller
	* I1109 21:29:44.786616       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I1109 21:29:44.886701       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* I1109 21:29:44.886748       1 shared_informer.go:247] Caches are synced for service config 
	* 
	* ==> kube-scheduler [c914f39d5ec3] <==
	* E1109 21:58:50.497636       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.16:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:50.527251       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.16:8441/api/v1/pods?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:50.550892       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.49.16:8441/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:50.575830       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.16:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:50.585477       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.16:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:50.603711       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.16:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:50.625974       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.49.16:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:50.699880       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.16:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:50.756780       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.16:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:50.762623       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.16:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:50.767064       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.49.16:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:50.890679       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.16:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:52.250393       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.16:8441/api/v1/pods?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:52.615656       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.49.16:8441/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.052569       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.16:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.161257       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.16:8441/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.193787       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.16:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.210773       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.16:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.302860       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.16:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.478788       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.16:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.552680       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.49.16:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.559088       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.16:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.711409       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.16:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.734163       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.49.16:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* E1109 21:58:53.799613       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.16:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.16:8441: connect: connection refused
	* 
	* ==> kube-scheduler [e9ec0f29f46e] <==
	* I1109 21:58:23.876286       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:58:23.876353       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:58:24.341894       1 serving.go:331] Generated self-signed cert in-memory
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2020-11-09 21:28:01 UTC, end at Mon 2020-11-09 21:58:56 UTC. --
	* Nov 09 21:58:51 functional-20201109132758-342799 kubelet[6062]: W1109 21:58:51.909132    6062 status_manager.go:550] Failed to get status for pod "kube-proxy-c7tgz_kube-system(a32163c8-dd65-4327-b113-df1425934a57)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-c7tgz": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:52 functional-20201109132758-342799 kubelet[6062]: W1109 21:58:52.109134    6062 status_manager.go:550] Failed to get status for pod "coredns-f9fd979d6-sf7ct_kube-system(74ac25d3-7e2a-44d7-9605-80a4b19acb84)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-f9fd979d6-sf7ct": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:52 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:52.309210    6062 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8441/api/v1/pods?fieldSelector=spec.nodeName%3Dfunctional-20201109132758-342799&resourceVersion=1656": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:52 functional-20201109132758-342799 kubelet[6062]: W1109 21:58:52.509095    6062 status_manager.go:550] Failed to get status for pod "storage-provisioner_kube-system(8a2dc99d-6b4d-4f0b-9230-ac84422061b4)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:52 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:52.709164    6062 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8441/apis/storage.k8s.io/v1/csidrivers?resourceVersion=457": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:52 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:52.909195    6062 reflector.go:127] object-"kube-system"/"storage-provisioner-token-5w2rk": Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dstorage-provisioner-token-5w2rk&resourceVersion=457": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:53 functional-20201109132758-342799 kubelet[6062]: W1109 21:58:53.109144    6062 status_manager.go:550] Failed to get status for pod "etcd-functional-20201109132758-342799_kube-system(c07f6bd14e48450e4d428f958a798e0e)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-20201109132758-342799": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:53 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:53.309371    6062 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.RuntimeClass: failed to list *v1beta1.RuntimeClass: Get "https://control-plane.minikube.internal:8441/apis/node.k8s.io/v1beta1/runtimeclasses?resourceVersion=457": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:53 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:53.509178    6062 reflector.go:127] object-"kube-system"/"kube-proxy-token-cn8nf": Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dkube-proxy-token-cn8nf&resourceVersion=457": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:53 functional-20201109132758-342799 kubelet[6062]: W1109 21:58:53.709121    6062 status_manager.go:550] Failed to get status for pod "kube-apiserver-functional-20201109132758-342799_kube-system(f5ecdefc6b776519ca22189eb9472242)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-20201109132758-342799": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:53 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:53.909268    6062 reflector.go:127] object-"kube-system"/"coredns-token-fd667": Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dcoredns-token-fd667&resourceVersion=457": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:54 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:54.109422    6062 reflector.go:127] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=457": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:54 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:54.308961    6062 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?resourceVersion=457": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:54 functional-20201109132758-342799 kubelet[6062]: W1109 21:58:54.684731    6062 status_manager.go:550] Failed to get status for pod "kube-scheduler-functional-20201109132758-342799_kube-system(ff7d12f9e4f14e202a85a7c5534a3129)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20201109132758-342799": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:54 functional-20201109132758-342799 kubelet[6062]: I1109 21:58:54.696216    6062 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 1eb996e347f57b9e6177e96b01ebf2c0dad008b44b9d5be598453a5ef4d474eb
	* Nov 09 21:58:54 functional-20201109132758-342799 kubelet[6062]: I1109 21:58:54.696877    6062 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 0e50d449739b50c10d896114a62770403360d20b04d73d5f2d16615791dcb302
	* Nov 09 21:58:54 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:54.698108    6062 pod_workers.go:191] Error syncing pod f5ecdefc6b776519ca22189eb9472242 ("kube-apiserver-functional-20201109132758-342799_kube-system(f5ecdefc6b776519ca22189eb9472242)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-20201109132758-342799_kube-system(f5ecdefc6b776519ca22189eb9472242)"
	* Nov 09 21:58:54 functional-20201109132758-342799 kubelet[6062]: W1109 21:58:54.708909    6062 status_manager.go:550] Failed to get status for pod "kube-apiserver-functional-20201109132758-342799_kube-system(f5ecdefc6b776519ca22189eb9472242)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-20201109132758-342799": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:54 functional-20201109132758-342799 kubelet[6062]: W1109 21:58:54.910944    6062 status_manager.go:550] Failed to get status for pod "etcd-functional-20201109132758-342799_kube-system(c07f6bd14e48450e4d428f958a798e0e)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-20201109132758-342799": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:55 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:55.336227    6062 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-20201109132758-342799&resourceVersion=1532": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:55 functional-20201109132758-342799 kubelet[6062]: I1109 21:58:55.731131    6062 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 0e50d449739b50c10d896114a62770403360d20b04d73d5f2d16615791dcb302
	* Nov 09 21:58:55 functional-20201109132758-342799 kubelet[6062]: W1109 21:58:55.731336    6062 status_manager.go:550] Failed to get status for pod "kube-apiserver-functional-20201109132758-342799_kube-system(f5ecdefc6b776519ca22189eb9472242)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-20201109132758-342799": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:55 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:55.731911    6062 pod_workers.go:191] Error syncing pod f5ecdefc6b776519ca22189eb9472242 ("kube-apiserver-functional-20201109132758-342799_kube-system(f5ecdefc6b776519ca22189eb9472242)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-functional-20201109132758-342799_kube-system(f5ecdefc6b776519ca22189eb9472242)"
	* Nov 09 21:58:55 functional-20201109132758-342799 kubelet[6062]: W1109 21:58:55.808697    6062 status_manager.go:550] Failed to get status for pod "kube-scheduler-functional-20201109132758-342799_kube-system(ff7d12f9e4f14e202a85a7c5534a3129)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20201109132758-342799": dial tcp 192.168.49.16:8441: connect: connection refused
	* Nov 09 21:58:55 functional-20201109132758-342799 kubelet[6062]: E1109 21:58:55.851073    6062 event.go:273] Unable to write event: 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": dial tcp 192.168.49.16:8441: connect: connection refused' (may retry after sleeping)
	* 
	* ==> storage-provisioner [55a96eb3b002] <==
	* I1109 21:50:53.548097       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I1109 21:51:12.286110       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I1109 21:51:12.287203       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_functional-20201109132758-342799_25e58adb-07af-41d5-9451-984ece52a972!
	* I1109 21:51:12.287838       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d68bb4f9-c53d-4241-bef8-3cce70269a69", APIVersion:"v1", ResourceVersion:"1370", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20201109132758-342799_25e58adb-07af-41d5-9451-984ece52a972 became leader
	* I1109 21:51:12.387684       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_functional-20201109132758-342799_25e58adb-07af-41d5-9451-984ece52a972!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 13:58:54.924849  664298 logs.go:181] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.19.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.19.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	E1109 13:58:55.448142  664298 logs.go:181] command /bin/bash -c "docker logs --tail 25 1eb996e347f5" failed with error: /bin/bash -c "docker logs --tail 25 1eb996e347f5": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: 1eb996e347f5
	 output: "\n** stderr ** \nError: No such container: 1eb996e347f5\n\n** /stderr **"
	! unable to fetch logs for: describe nodes, kube-apiserver [1eb996e347f5]

                                                
                                                
** /stderr **
helpers_test.go:243: failed logs error: exit status 110
--- FAIL: TestFunctional/parallel/DockerEnv (37.67s)
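
The common symptom running through the logs above is the kube-apiserver at 192.168.49.16:8441 refusing TCP connections while the test helpers tried to collect pod status and logs. Below is a minimal sketch of a standalone probe that reproduces that same connect check outside the test suite (assumptions: a Go toolchain on the host; the address is taken from the failure output above and is not a fixed minikube endpoint; the file name probe_apiserver.go is hypothetical):

	// probe_apiserver.go - hypothetical helper, not part of the minikube test suite.
	// It attempts the same TCP connect that fails above with
	// "dial tcp 192.168.49.16:8441: connect: connection refused".
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		addr := "192.168.49.16:8441" // taken from the log lines above; pass another host:port as arg 1
		if len(os.Args) > 1 {
			addr = os.Args[1]
		}
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "apiserver not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("apiserver endpoint %s is accepting TCP connections\n", addr)
	}

Run as "go run probe_apiserver.go" (optionally passing a different host:port); a non-zero exit mirrors the connection-refused condition that made the "kubectl describe nodes" and "docker logs" calls in the stderr block above fail.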

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (261.8s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20201109133858-342799 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p pause-20201109133858-342799 --alsologtostderr -v=1: (4m11.816072599s)
pause_test.go:94: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-20201109133858-342799] minikube v1.14.2 on Debian 9.13
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube
	  - MINIKUBE_LOCATION=9627
	* Using the docker driver based on existing profile
	
	
	* Starting control plane node pause-20201109133858-342799 in cluster pause-20201109133858-342799
	* Updating the running docker "pause-20201109133858-342799" container ...
	* Preparing Kubernetes v1.19.2 on Docker 19.03.13 ...
	* Verifying Kubernetes components...
	* Enabled addons: storage-provisioner, default-storageclass
	* Done! kubectl is now configured to use "pause-20201109133858-342799" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:41:53.435308  467185 out.go:185] Setting OutFile to fd 1 ...
	I1109 13:41:53.435763  467185 out.go:232] TERM=,COLORTERM=, which probably does not support color
	I1109 13:41:53.435771  467185 out.go:198] Setting ErrFile to fd 2...
	I1109 13:41:53.435778  467185 out.go:232] TERM=,COLORTERM=, which probably does not support color
	I1109 13:41:53.435962  467185 root.go:279] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/bin
	I1109 13:41:53.436340  467185 out.go:192] Setting JSON to false
	I1109 13:41:53.504557  467185 start.go:103] hostinfo: {"hostname":"kic-integration-slave8","uptime":5063,"bootTime":1604953050,"procs":383,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-14-amd64","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"ae41e7f6-8b8e-4d40-b77d-1ebb5a2d5fdb"}
	I1109 13:41:53.505532  467185 start.go:113] virtualization: kvm host
	I1109 13:41:53.508790  467185 out.go:110] * [pause-20201109133858-342799] minikube v1.14.2 on Debian 9.13
	I1109 13:41:53.512461  467185 out.go:110]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/kubeconfig
	I1109 13:41:53.515694  467185 out.go:110]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 13:41:53.519112  467185 out.go:110]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube
	I1109 13:41:53.522046  467185 out.go:110]   - MINIKUBE_LOCATION=9627
	I1109 13:41:53.522953  467185 driver.go:288] Setting default libvirt URI to qemu:///system
	I1109 13:41:53.607501  467185 docker.go:117] docker version: linux-19.03.13
	I1109 13:41:53.607715  467185 cli_runner.go:110] Run: docker system info --format "{{json .}}"
	I1109 13:41:53.745414  467185 info.go:253] docker info: {ID:F6IX:ZLDR:GSU5:57QV:GUUZ:QOCT:V5VG:5GRC:MXPB:2JZB:PBMT:ABFJ Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:802 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:101 OomKillDisable:true NGoroutines:100 SystemTime:2020-11-09 13:41:53.666314525 -0800 PST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-14-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:31628288000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:kic-integration-slave8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fba4e9a7d01810a393d5d25a3621dc101981175 Expected:8fba4e9a7d01810a393d5d25a3621dc101981175} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1109 13:41:53.745573  467185 docker.go:147] overlay module found
	I1109 13:41:53.749727  467185 out.go:110] * Using the docker driver based on existing profile
	I1109 13:41:53.749769  467185 start.go:272] selected driver: docker
	I1109 13:41:53.749780  467185 start.go:680] validating driver "docker" against &{Name:pause-20201109133858-342799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8 Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.2 ClusterName:pause-20201109133858-342799 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.82.16 Port:8443 KubernetesVersion:v1.19.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true kubelet:true system_pods:true] StartHostTimeout:6m0s ExposedPorts:[]}
	I1109 13:41:53.749907  467185 start.go:691] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Fix: Doc:}
	I1109 13:41:53.750029  467185 cli_runner.go:110] Run: docker system info --format "{{json .}}"
	I1109 13:41:53.881764  467185 info.go:253] docker info: {ID:F6IX:ZLDR:GSU5:57QV:GUUZ:QOCT:V5VG:5GRC:MXPB:2JZB:PBMT:ABFJ Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:802 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:101 OomKillDisable:true NGoroutines:100 SystemTime:2020-11-09 13:41:53.811736437 -0800 PST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-14-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:31628288000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:kic-integration-slave8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fba4e9a7d01810a393d5d25a3621dc101981175 Expected:8fba4e9a7d01810a393d5d25a3621dc101981175} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1109 13:41:53.889852  467185 out.go:110] 
	W1109 13:41:53.890107  467185 out.go:146] X Requested memory allocation (1800MB) is less than the recommended minimum 1907MB. Deployments may fail.
	X Requested memory allocation (1800MB) is less than the recommended minimum 1907MB. Deployments may fail.
	I1109 13:41:53.895731  467185 out.go:110] 
	I1109 13:41:53.895833  467185 start_flags.go:364] config:
	{Name:pause-20201109133858-342799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8 Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.2 ClusterName:pause-20201109133858-342799 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.82.16 Port:8443 KubernetesVersion:v1.19.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true kubelet:true system_pods:true] StartHostTimeout:6m0s ExposedPorts:[]}
	I1109 13:41:53.898345  467185 out.go:110] * Starting control plane node pause-20201109133858-342799 in cluster pause-20201109133858-342799
	I1109 13:41:54.172195  467185 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8 in local docker daemon, skipping pull
	I1109 13:41:54.172233  467185 cache.go:115] gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8 exists in daemon, skipping pull
	I1109 13:41:54.172286  467185 preload.go:97] Checking if preload exists for k8s version v1.19.2 and runtime docker
	I1109 13:41:54.172340  467185 preload.go:105] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4
	I1109 13:41:54.172353  467185 cache.go:53] Caching tarball of preloaded images
	I1109 13:41:54.172370  467185 preload.go:131] Found /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1109 13:41:54.172384  467185 cache.go:56] Finished verifying existence of preloaded tar for  v1.19.2 on docker
	I1109 13:41:54.172505  467185 profile.go:150] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/profiles/pause-20201109133858-342799/config.json ...
	I1109 13:41:54.172723  467185 cache.go:182] Successfully downloaded all kic artifacts
	I1109 13:41:54.172762  467185 start.go:314] acquiring machines lock for pause-20201109133858-342799: {Name:mkc9df6d898b909f63ff89667f4abe35408213f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 13:41:54.173173  467185 start.go:318] acquired machines lock for "pause-20201109133858-342799" in 199.977µs
	I1109 13:41:54.173203  467185 start.go:94] Skipping create...Using existing machine configuration
	I1109 13:41:54.173210  467185 fix.go:54] fixHost starting: 
	I1109 13:41:54.173575  467185 cli_runner.go:110] Run: docker container inspect pause-20201109133858-342799 --format={{.State.Status}}
	I1109 13:41:54.233458  467185 fix.go:107] recreateIfNeeded on pause-20201109133858-342799: state=Running err=<nil>
	W1109 13:41:54.233534  467185 fix.go:133] unexpected machine state, will restart: <nil>
	I1109 13:41:54.240095  467185 out.go:110] * Updating the running docker "pause-20201109133858-342799" container ...
	I1109 13:41:54.240147  467185 machine.go:88] provisioning docker machine ...
	I1109 13:41:54.240176  467185 ubuntu.go:166] provisioning hostname "pause-20201109133858-342799"
	I1109 13:41:54.240252  467185 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20201109133858-342799
	I1109 13:41:54.313238  467185 main.go:119] libmachine: Using SSH client type: native
	I1109 13:41:54.313585  467185 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x8088c0] 0x808880 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I1109 13:41:54.313620  467185 main.go:119] libmachine: About to run SSH command:
	sudo hostname pause-20201109133858-342799 && echo "pause-20201109133858-342799" | sudo tee /etc/hostname
	I1109 13:41:54.474378  467185 main.go:119] libmachine: SSH cmd err, output: <nil>: pause-20201109133858-342799
	
	I1109 13:41:54.474498  467185 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20201109133858-342799
	I1109 13:41:54.537550  467185 main.go:119] libmachine: Using SSH client type: native
	I1109 13:41:54.537970  467185 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x8088c0] 0x808880 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I1109 13:41:54.538036  467185 main.go:119] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20201109133858-342799' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20201109133858-342799/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20201109133858-342799' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 13:41:54.674507  467185 main.go:119] libmachine: SSH cmd err, output: <nil>: 
	I1109 13:41:54.674541  467185 ubuntu.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube}
	I1109 13:41:54.674572  467185 ubuntu.go:174] setting up certificates
	I1109 13:41:54.674585  467185 provision.go:82] configureAuth start
	I1109 13:41:54.674825  467185 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20201109133858-342799
	I1109 13:41:54.741031  467185 provision.go:131] copyHostCerts
	I1109 13:41:54.741121  467185 exec_runner.go:91] found /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/ca.pem, removing ...
	I1109 13:41:54.741204  467185 exec_runner.go:98] cp: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/ca.pem (1082 bytes)
	I1109 13:41:54.741320  467185 exec_runner.go:91] found /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/cert.pem, removing ...
	I1109 13:41:54.741403  467185 exec_runner.go:98] cp: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/cert.pem (1123 bytes)
	I1109 13:41:54.741497  467185 exec_runner.go:91] found /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/key.pem, removing ...
	I1109 13:41:54.741529  467185 exec_runner.go:98] cp: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/key.pem (1679 bytes)
	I1109 13:41:54.741575  467185 provision.go:105] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/ca-key.pem org=jenkins.pause-20201109133858-342799 san=[192.168.82.16 localhost 127.0.0.1 minikube pause-20201109133858-342799]
	I1109 13:41:55.468337  467185 provision.go:159] copyRemoteCerts
	I1109 13:41:55.468413  467185 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 13:41:55.468454  467185 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20201109133858-342799
	I1109 13:41:55.530114  467185 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/machines/pause-20201109133858-342799/id_rsa Username:docker}
	I1109 13:41:55.635635  467185 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1109 13:41:55.675401  467185 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 13:41:55.716216  467185 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 13:41:55.754097  467185 provision.go:85] duration metric: configureAuth took 1.079495072s
	I1109 13:41:55.754132  467185 ubuntu.go:190] setting minikube options for container-runtime
	I1109 13:41:55.754436  467185 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20201109133858-342799
	I1109 13:41:55.838108  467185 main.go:119] libmachine: Using SSH client type: native
	I1109 13:41:55.838461  467185 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x8088c0] 0x808880 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I1109 13:41:55.838493  467185 main.go:119] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1109 13:41:55.992210  467185 main.go:119] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1109 13:41:55.992244  467185 ubuntu.go:71] root file system type: overlay
	I1109 13:41:55.992655  467185 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
	I1109 13:41:55.992758  467185 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20201109133858-342799
	I1109 13:41:56.068217  467185 main.go:119] libmachine: Using SSH client type: native
	I1109 13:41:56.068444  467185 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x8088c0] 0x808880 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I1109 13:41:56.068576  467185 main.go:119] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	
	[Service]
	Type=notify
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1109 13:41:56.244815  467185 main.go:119] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	
	[Service]
	Type=notify
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP 
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1109 13:41:56.244965  467185 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20201109133858-342799
	I1109 13:41:56.320584  467185 main.go:119] libmachine: Using SSH client type: native
	I1109 13:41:56.320886  467185 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x8088c0] 0x808880 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I1109 13:41:56.320915  467185 main.go:119] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1109 13:41:56.490271  467185 main.go:119] libmachine: SSH cmd err, output: <nil>: 
	I1109 13:41:56.490313  467185 machine.go:91] provisioned docker machine in 2.250156971s
	I1109 13:41:56.490330  467185 start.go:268] post-start starting for "pause-20201109133858-342799" (driver="docker")
	I1109 13:41:56.490341  467185 start.go:278] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 13:41:56.490422  467185 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 13:41:56.490480  467185 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20201109133858-342799
	I1109 13:41:56.554049  467185 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/machines/pause-20201109133858-342799/id_rsa Username:docker}
	I1109 13:41:56.668493  467185 ssh_runner.go:148] Run: cat /etc/os-release
	I1109 13:41:56.674948  467185 main.go:119] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 13:41:56.674985  467185 main.go:119] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 13:41:56.675000  467185 main.go:119] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 13:41:56.675009  467185 info.go:97] Remote host: Ubuntu 20.04.1 LTS
	I1109 13:41:56.675027  467185 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/addons for local assets ...
	I1109 13:41:56.675100  467185 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/files for local assets ...
	I1109 13:41:56.675280  467185 filesync.go:141] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/files/etc/test/nested/copy/342799/hosts -> hosts in /etc/test/nested/copy/342799
	I1109 13:41:56.675340  467185 ssh_runner.go:148] Run: sudo mkdir -p /etc/test/nested/copy/342799
	I1109 13:41:56.687497  467185 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/files/etc/test/nested/copy/342799/hosts --> /etc/test/nested/copy/342799/hosts (40 bytes)
	I1109 13:41:57.824151  467185 start.go:271] post-start completed in 1.333800355s
	I1109 13:41:57.824364  467185 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:41:57.824436  467185 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20201109133858-342799
	I1109 13:41:57.883008  467185 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/machines/pause-20201109133858-342799/id_rsa Username:docker}
	I1109 13:41:57.977548  467185 fix.go:56] fixHost completed within 3.804330095s
	I1109 13:41:57.977586  467185 start.go:81] releasing machines lock for "pause-20201109133858-342799", held for 3.804394358s
	I1109 13:41:57.977692  467185 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20201109133858-342799
	I1109 13:41:58.039112  467185 ssh_runner.go:148] Run: systemctl --version
	I1109 13:41:58.039128  467185 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1109 13:41:58.039171  467185 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20201109133858-342799
	I1109 13:41:58.039216  467185 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20201109133858-342799
	I1109 13:41:58.101112  467185 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/machines/pause-20201109133858-342799/id_rsa Username:docker}
	I1109 13:41:58.101983  467185 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/machines/pause-20201109133858-342799/id_rsa Username:docker}
	I1109 13:41:58.234666  467185 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd
	I1109 13:41:58.254101  467185 ssh_runner.go:148] Run: sudo systemctl cat docker.service
	I1109 13:41:58.268238  467185 cruntime.go:193] skipping containerd shutdown because we are bound to it
	I1109 13:41:58.268309  467185 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
	I1109 13:41:58.283053  467185 ssh_runner.go:148] Run: sudo systemctl cat docker.service
	I1109 13:41:58.297268  467185 ssh_runner.go:148] Run: sudo systemctl daemon-reload
	I1109 13:41:58.881166  467185 ssh_runner.go:148] Run: sudo systemctl start docker
	I1109 13:42:02.377915  467185 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
	I1109 13:42:06.244958  467185 out.go:110] * Preparing Kubernetes v1.19.2 on Docker 19.03.13 ...
	I1109 13:42:06.245248  467185 cli_runner.go:110] Run: docker network inspect pause-20201109133858-342799 --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}"
	I1109 13:42:06.299618  467185 ssh_runner.go:148] Run: grep 192.168.82.1	host.minikube.internal$ /etc/hosts
	I1109 13:42:06.304817  467185 preload.go:97] Checking if preload exists for k8s version v1.19.2 and runtime docker
	I1109 13:42:06.304858  467185 preload.go:105] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4
	I1109 13:42:06.304903  467185 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 13:42:06.358807  467185 docker.go:381] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.19.2
	k8s.gcr.io/kube-apiserver:v1.19.2
	k8s.gcr.io/kube-controller-manager:v1.19.2
	k8s.gcr.io/kube-scheduler:v1.19.2
	minikube-local-cache-test:functional-20201109132758-342799
	gcr.io/k8s-minikube/storage-provisioner:v3
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/dashboard:v2.0.3
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I1109 13:42:06.358843  467185 docker.go:386] docker.io/kubernetesui/dashboard:v2.0.3 wasn't preloaded
	I1109 13:42:06.358941  467185 ssh_runner.go:148] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1109 13:42:06.531988  467185 ssh_runner.go:148] Run: which lz4
	I1109 13:42:06.536630  467185 ssh_runner.go:148] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1109 13:42:06.541410  467185 ssh_runner.go:205] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I1109 13:42:06.541450  467185 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (509957718 bytes)
	I1109 13:42:44.904987  467185 docker.go:347] Took 38.368442 seconds to copy over tarball
	I1109 13:42:46.983607  467185 ssh_runner.go:148] Run: sudo tar -I lz4 -C /var -xvf /preloaded.tar.lz4
	I1109 13:45:20.684402  467185 ssh_runner.go:188] Completed: sudo tar -I lz4 -C /var -xvf /preloaded.tar.lz4: (2m33.700731687s)
	I1109 13:45:20.684450  467185 ssh_runner.go:99] rm: /preloaded.tar.lz4
	I1109 13:45:20.723456  467185 ssh_runner.go:148] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1109 13:45:20.735848  467185 ssh_runner.go:215] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3290 bytes)
	I1109 13:45:20.758231  467185 ssh_runner.go:148] Run: sudo systemctl daemon-reload
	I1109 13:45:20.951283  467185 ssh_runner.go:148] Run: sudo systemctl restart docker
	I1109 13:45:35.734634  467185 ssh_runner.go:188] Completed: sudo systemctl restart docker: (14.783303691s)
	I1109 13:45:35.734731  467185 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 13:45:35.818600  467185 docker.go:381] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.19.2
	k8s.gcr.io/kube-apiserver:v1.19.2
	k8s.gcr.io/kube-controller-manager:v1.19.2
	k8s.gcr.io/kube-scheduler:v1.19.2
	minikube-local-cache-test:functional-20201109132758-342799
	gcr.io/k8s-minikube/storage-provisioner:v3
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/dashboard:v2.0.3
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I1109 13:45:35.818626  467185 docker.go:386] docker.io/kubernetesui/dashboard:v2.0.3 wasn't preloaded
	I1109 13:45:35.818637  467185 cache_images.go:77] LoadImages start: [k8s.gcr.io/kube-proxy:v1.19.2 k8s.gcr.io/kube-scheduler:v1.19.2 k8s.gcr.io/kube-controller-manager:v1.19.2 k8s.gcr.io/kube-apiserver:v1.19.2 k8s.gcr.io/coredns:1.7.0 k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/pause:3.2 gcr.io/k8s-minikube/storage-provisioner:v3 docker.io/kubernetesui/dashboard:v2.0.3 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I1109 13:45:35.822149  467185 image.go:168] retrieving image: k8s.gcr.io/etcd:3.4.13-0
	I1109 13:45:35.822233  467185 image.go:168] retrieving image: k8s.gcr.io/kube-scheduler:v1.19.2
	I1109 13:45:35.822246  467185 image.go:168] retrieving image: k8s.gcr.io/kube-controller-manager:v1.19.2
	I1109 13:45:35.822149  467185 image.go:168] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I1109 13:45:35.822477  467185 image.go:168] retrieving image: k8s.gcr.io/kube-apiserver:v1.19.2
	I1109 13:45:35.822511  467185 image.go:168] retrieving image: k8s.gcr.io/kube-proxy:v1.19.2
	I1109 13:45:35.822530  467185 image.go:168] retrieving image: k8s.gcr.io/coredns:1.7.0
	I1109 13:45:35.822655  467185 image.go:168] retrieving image: k8s.gcr.io/pause:3.2
	I1109 13:45:35.822702  467185 image.go:168] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v3
	I1109 13:45:35.822935  467185 image.go:168] retrieving image: docker.io/kubernetesui/dashboard:v2.0.3
	I1109 13:45:35.823512  467185 image.go:176] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.19.2: Error response from daemon: reference does not exist
	I1109 13:45:35.823577  467185 image.go:176] daemon lookup for k8s.gcr.io/etcd:3.4.13-0: Error response from daemon: reference does not exist
	I1109 13:45:35.823592  467185 image.go:176] daemon lookup for k8s.gcr.io/kube-proxy:v1.19.2: Error response from daemon: reference does not exist
	I1109 13:45:35.823762  467185 image.go:176] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.4: Error response from daemon: reference does not exist
	I1109 13:45:35.823861  467185 image.go:176] daemon lookup for k8s.gcr.io/pause:3.2: Error response from daemon: reference does not exist
	I1109 13:45:35.823871  467185 image.go:176] daemon lookup for docker.io/kubernetesui/dashboard:v2.0.3: Error response from daemon: reference does not exist
	I1109 13:45:35.823925  467185 image.go:176] daemon lookup for k8s.gcr.io/kube-scheduler:v1.19.2: Error response from daemon: reference does not exist
	I1109 13:45:35.823925  467185 image.go:176] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v3: Error response from daemon: reference does not exist
	I1109 13:45:35.823994  467185 image.go:176] daemon lookup for k8s.gcr.io/coredns:1.7.0: Error response from daemon: reference does not exist
	I1109 13:45:35.824156  467185 image.go:176] daemon lookup for k8s.gcr.io/kube-apiserver:v1.19.2: Error response from daemon: reference does not exist
	I1109 13:45:36.108609  467185 ssh_runner.go:148] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v3
	I1109 13:45:36.109674  467185 ssh_runner.go:148] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.19.2
	I1109 13:45:36.137458  467185 ssh_runner.go:148] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.7.0
	I1109 13:45:36.138636  467185 ssh_runner.go:148] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.19.2
	I1109 13:45:36.139636  467185 ssh_runner.go:148] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.19.2
	I1109 13:45:36.139751  467185 ssh_runner.go:148] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.13-0
	I1109 13:45:36.140524  467185 ssh_runner.go:148] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.2
	I1109 13:45:36.142657  467185 ssh_runner.go:148] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.19.2
	I1109 13:45:36.208814  467185 ssh_runner.go:148] Run: docker image inspect --format {{.Id}} docker.io/kubernetesui/metrics-scraper:v1.0.4
	I1109 13:45:36.328259  467185 ssh_runner.go:148] Run: docker image inspect --format {{.Id}} docker.io/kubernetesui/dashboard:v2.0.3
	I1109 13:45:36.499518  467185 cache_images.go:112] Successfully loaded all cached images
	I1109 13:45:36.499545  467185 cache_images.go:81] LoadImages completed in 680.89058ms
	I1109 13:45:36.499640  467185 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
	I1109 13:45:36.620831  467185 cni.go:74] Creating CNI manager for ""
	I1109 13:45:36.620866  467185 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
	I1109 13:45:36.620876  467185 kubeadm.go:84] Using pod CIDR: 
	I1109 13:45:36.620895  467185 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:192.168.82.16 APIServerPort:8443 KubernetesVersion:v1.19.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20201109133858-342799 NodeName:pause-20201109133858-342799 DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.82.16"]]} {Component:controllerManager ExtraArgs:map[leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.82.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1109 13:45:36.621067  467185 kubeadm.go:154] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.82.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "pause-20201109133858-342799"
	  kubeletExtraArgs:
	    node-ip: 192.168.82.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.82.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.19.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: ""
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: ""
	metricsBindAddress: 192.168.82.16:10249
	
	I1109 13:45:36.621164  467185 kubeadm.go:822] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.19.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=pause-20201109133858-342799 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.82.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.19.2 ClusterName:pause-20201109133858-342799 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1109 13:45:36.621233  467185 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.19.2
	I1109 13:45:36.633670  467185 binaries.go:44] Found k8s binaries, skipping transfer
	I1109 13:45:36.633757  467185 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 13:45:36.645541  467185 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1109 13:45:36.665229  467185 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
	I1109 13:45:36.684816  467185 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1810 bytes)
	I1109 13:45:36.704979  467185 ssh_runner.go:148] Run: grep 192.168.82.16	control-plane.minikube.internal$ /etc/hosts
	I1109 13:45:36.711206  467185 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/profiles/pause-20201109133858-342799 for IP: 192.168.82.16
	I1109 13:45:36.711291  467185 certs.go:169] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/ca.key
	I1109 13:45:36.711316  467185 certs.go:169] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/proxy-client-ca.key
	I1109 13:45:36.711483  467185 certs.go:269] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/profiles/pause-20201109133858-342799/client.key
	I1109 13:45:36.711517  467185 certs.go:269] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/profiles/pause-20201109133858-342799/apiserver.key.c32c81d3
	I1109 13:45:36.711543  467185 certs.go:269] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/profiles/pause-20201109133858-342799/proxy-client.key
	I1109 13:45:36.711711  467185 certs.go:348] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/342799.pem (1338 bytes)
	W1109 13:45:36.711788  467185 certs.go:344] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/342799_empty.pem, impossibly tiny 0 bytes
	I1109 13:45:36.711811  467185 certs.go:348] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 13:45:36.711859  467185 certs.go:348] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/ca.pem (1082 bytes)
	I1109 13:45:36.711906  467185 certs.go:348] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/cert.pem (1123 bytes)
	I1109 13:45:36.712088  467185 certs.go:348] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/key.pem (1679 bytes)
	I1109 13:45:36.713641  467185 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/profiles/pause-20201109133858-342799/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1109 13:45:36.741949  467185 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/profiles/pause-20201109133858-342799/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 13:45:36.769295  467185 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/profiles/pause-20201109133858-342799/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 13:45:36.821486  467185 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/profiles/pause-20201109133858-342799/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1109 13:45:36.857926  467185 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 13:45:36.894244  467185 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 13:45:36.927682  467185 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 13:45:36.964282  467185 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1109 13:45:36.998805  467185 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/certs/342799.pem --> /usr/share/ca-certificates/342799.pem (1338 bytes)
	I1109 13:45:37.030991  467185 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 13:45:37.056915  467185 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
	I1109 13:45:37.080698  467185 ssh_runner.go:148] Run: openssl version
	I1109 13:45:37.088486  467185 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/342799.pem && ln -fs /usr/share/ca-certificates/342799.pem /etc/ssl/certs/342799.pem"
	I1109 13:45:37.102051  467185 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/342799.pem
	I1109 13:45:37.107844  467185 certs.go:389] hashing: -rw-r--r-- 1 root root 1338 Nov  9 21:27 /usr/share/ca-certificates/342799.pem
	I1109 13:45:37.107943  467185 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/342799.pem
	I1109 13:45:37.117184  467185 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/342799.pem /etc/ssl/certs/51391683.0"
	I1109 13:45:37.128918  467185 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 13:45:37.145596  467185 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:45:37.151077  467185 certs.go:389] hashing: -rw-r--r-- 1 root root 1111 Nov  9 21:21 /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:45:37.151175  467185 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:45:37.160868  467185 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 13:45:37.175072  467185 kubeadm.go:324] StartCluster: {Name:pause-20201109133858-342799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8 Memory:1800 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.2 ClusterName:pause-20201109133858-342799 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.82.16 Port:8443 KubernetesVersion:v1.19.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true kubelet:true system_pods:true] StartHostTimeout:6m0s ExposedPorts:[]}
	I1109 13:45:37.175235  467185 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 13:45:37.254516  467185 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 13:45:37.273396  467185 kubeadm.go:335] found existing configuration files, will attempt cluster restart
	I1109 13:45:37.273424  467185 kubeadm.go:527] restartCluster start
	I1109 13:45:37.273488  467185 ssh_runner.go:148] Run: sudo test -d /data/minikube
	I1109 13:45:37.286115  467185 kubeadm.go:122] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 13:45:37.287635  467185 kubeconfig.go:93] found "pause-20201109133858-342799" server: "https://192.168.82.16:8443"
	I1109 13:45:37.288684  467185 kapi.go:59] client config for pause-20201109133858-342799: &rest.Config{Host:"https://192.168.82.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/profiles/pause-20201109133858-342799/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/profiles/pause-20201109133858-342799/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17704c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
	I1109 13:45:37.291708  467185 ssh_runner.go:148] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 13:45:37.352070  467185 api_server.go:146] Checking apiserver status ...
	I1109 13:45:37.352154  467185 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 13:45:37.377375  467185 api_server.go:150] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 13:45:37.377407  467185 kubeadm.go:506] needs reconfigure: apiserver in state Stopped
	I1109 13:45:37.377421  467185 kubeadm.go:945] stopping kube-system containers ...
	I1109 13:45:37.377496  467185 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 13:45:37.444645  467185 docker.go:229] Stopping containers: [1834a7dec7a3 50e54b941795 b3b67c2eed1c a8205b634d16 0403123a067b 9af6016feab9 227da54d6a6f 100b58909b87 b608f3b39021 0e411220a11d 4c4a2bbf2224 5637e8c68b52 4d15c72bacbc 64ad079da229]
	I1109 13:45:37.444725  467185 ssh_runner.go:148] Run: docker stop 1834a7dec7a3 50e54b941795 b3b67c2eed1c a8205b634d16 0403123a067b 9af6016feab9 227da54d6a6f 100b58909b87 b608f3b39021 0e411220a11d 4c4a2bbf2224 5637e8c68b52 4d15c72bacbc 64ad079da229
	I1109 13:45:37.520033  467185 ssh_runner.go:148] Run: sudo systemctl stop kubelet
	I1109 13:45:37.551307  467185 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 13:45:37.563223  467185 kubeadm.go:150] found existing configuration files:
	-rw------- 1 root root 5611 Nov  9 21:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5629 Nov  9 21:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2047 Nov  9 21:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5581 Nov  9 21:40 /etc/kubernetes/scheduler.conf
	
	I1109 13:45:37.563304  467185 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 13:45:37.577191  467185 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 13:45:37.590829  467185 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 13:45:37.605048  467185 kubeadm.go:161] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1109 13:45:37.605128  467185 ssh_runner.go:148] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 13:45:37.615559  467185 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 13:45:37.626107  467185 kubeadm.go:161] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1109 13:45:37.626182  467185 ssh_runner.go:148] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 13:45:37.637113  467185 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 13:45:37.648260  467185 kubeadm.go:603] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1109 13:45:37.648298  467185 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 13:45:39.467078  467185 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml": (1.818749718s)
	I1109 13:45:39.467131  467185 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 13:45:40.688963  467185 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.221806772s)
	I1109 13:45:40.689004  467185 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1109 13:45:41.123680  467185 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 13:45:41.339650  467185 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1109 13:45:41.575501  467185 api_server.go:48] waiting for apiserver process to appear ...
	I1109 13:45:41.575581  467185 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:45:42.095215  467185 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:45:42.595295  467185 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:45:43.095313  467185 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:45:43.595250  467185 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:45:44.095259  467185 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:45:44.595280  467185 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:45:45.095220  467185 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:45:45.595231  467185 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:45:46.095223  467185 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:45:46.595222  467185 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:45:47.095240  467185 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:45:47.595239  467185 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:45:48.095323  467185 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:45:48.595211  467185 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:45:49.095212  467185 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:45:49.292584  467185 api_server.go:68] duration metric: took 7.717081393s to wait for apiserver process to appear ...
	I1109 13:45:49.292618  467185 api_server.go:84] waiting for apiserver healthz status ...
	I1109 13:45:49.292632  467185 api_server.go:221] Checking apiserver healthz at https://192.168.82.16:8443/healthz ...
	I1109 13:45:49.293012  467185 api_server.go:231] stopped: https://192.168.82.16:8443/healthz: Get "https://192.168.82.16:8443/healthz": dial tcp 192.168.82.16:8443: connect: connection refused
	I1109 13:45:49.793274  467185 api_server.go:221] Checking apiserver healthz at https://192.168.82.16:8443/healthz ...
	I1109 13:45:57.494112  467185 api_server.go:241] https://192.168.82.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1109 13:45:57.494144  467185 api_server.go:99] status: https://192.168.82.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1109 13:45:57.793371  467185 api_server.go:221] Checking apiserver healthz at https://192.168.82.16:8443/healthz ...
	I1109 13:45:57.861402  467185 api_server.go:241] https://192.168.82.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1109 13:45:57.861438  467185 api_server.go:99] status: https://192.168.82.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1109 13:45:58.293276  467185 api_server.go:221] Checking apiserver healthz at https://192.168.82.16:8443/healthz ...
	I1109 13:45:58.305850  467185 api_server.go:241] https://192.168.82.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1109 13:45:58.305884  467185 api_server.go:99] status: https://192.168.82.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1109 13:45:58.793255  467185 api_server.go:221] Checking apiserver healthz at https://192.168.82.16:8443/healthz ...
	I1109 13:45:58.808153  467185 api_server.go:241] https://192.168.82.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1109 13:45:58.808187  467185 api_server.go:99] status: https://192.168.82.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1109 13:45:59.293396  467185 api_server.go:221] Checking apiserver healthz at https://192.168.82.16:8443/healthz ...
	I1109 13:45:59.300463  467185 api_server.go:241] https://192.168.82.16:8443/healthz returned 200:
	ok
	I1109 13:45:59.309828  467185 api_server.go:137] control plane version: v1.19.2
	I1109 13:45:59.309866  467185 api_server.go:127] duration metric: took 10.017238437s to wait for apiserver health ...
	I1109 13:45:59.309882  467185 cni.go:74] Creating CNI manager for ""
	I1109 13:45:59.309898  467185 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
	I1109 13:45:59.309908  467185 system_pods.go:41] waiting for kube-system pods to appear ...
	I1109 13:45:59.330360  467185 system_pods.go:57] 6 kube-system pods found
	I1109 13:45:59.330404  467185 system_pods.go:59] "coredns-f9fd979d6-mzdxp" [d5876a87-498a-443f-abe3-db2bf0fd7e31] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:45:59.330411  467185 system_pods.go:59] "etcd-pause-20201109133858-342799" [a61c86e8-9514-4e22-adc9-b61173f06654] Running
	I1109 13:45:59.330419  467185 system_pods.go:59] "kube-apiserver-pause-20201109133858-342799" [a1b30a2f-3a5f-48e9-b24d-982efae08fba] Running
	I1109 13:45:59.330424  467185 system_pods.go:59] "kube-controller-manager-pause-20201109133858-342799" [e52ca962-961b-419a-9c5c-fc57ad1be218] Running
	I1109 13:45:59.330431  467185 system_pods.go:59] "kube-proxy-zsvph" [262744fc-0400-43be-b5e5-16316d67e22c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1109 13:45:59.330443  467185 system_pods.go:59] "kube-scheduler-pause-20201109133858-342799" [2db9ddc3-493e-4172-9b76-6ee4bc3889a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 13:45:59.330455  467185 system_pods.go:72] duration metric: took 20.538857ms to wait for pod list to return data ...
	I1109 13:45:59.330466  467185 node_conditions.go:101] verifying NodePressure condition ...
	I1109 13:45:59.337394  467185 node_conditions.go:121] node storage ephemeral capacity is 515928484Ki
	I1109 13:45:59.337462  467185 node_conditions.go:122] node cpu capacity is 8
	I1109 13:45:59.337489  467185 node_conditions.go:104] duration metric: took 7.015319ms to run NodePressure ...
	I1109 13:45:59.337522  467185 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 13:46:00.306193  467185 ssh_runner.go:148] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 13:46:00.365155  467185 ops.go:34] apiserver oom_adj: -16
	I1109 13:46:00.365196  467185 kubeadm.go:531] restartCluster took 23.091763898s
	I1109 13:46:00.365211  467185 kubeadm.go:326] StartCluster complete in 23.190151183s
	I1109 13:46:00.365241  467185 settings.go:127] acquiring lock: {Name:mkaf3940a97c224b7c147d7635a33a635839aae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:46:00.365452  467185 settings.go:135] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/kubeconfig
	I1109 13:46:00.367063  467185 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/kubeconfig: {Name:mk2f275b8729a3b6c24b0f7b0d3bbc78fecc3fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:46:00.371036  467185 start.go:198] Will wait 6m0s for node up to 
	I1109 13:46:00.380137  467185 out.go:110] * Verifying Kubernetes components...
	I1109 13:46:00.380245  467185 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service kubelet
	I1109 13:46:00.371242  467185 cache.go:92] acquiring lock: {Name:mkbd5644f58c5aa1ead54ccc0e0eb9ca7a258760 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 13:46:00.380508  467185 cache.go:100] /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/cache/images/minikube-local-cache-test_functional-20201109132758-342799 exists
	I1109 13:46:00.380665  467185 cache.go:81] cache image "minikube-local-cache-test:functional-20201109132758-342799" -> "/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/cache/images/minikube-local-cache-test_functional-20201109132758-342799" took 9.439111ms
	I1109 13:46:00.380701  467185 cache.go:66] save to tar file minikube-local-cache-test:functional-20201109132758-342799 -> /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/cache/images/minikube-local-cache-test_functional-20201109132758-342799 succeeded
	I1109 13:46:00.380717  467185 cache.go:73] Successfully saved all images to host disk.
	I1109 13:46:00.371373  467185 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.2/kubectl scale deployment --replicas=1 coredns -n=kube-system
	I1109 13:46:00.380925  467185 cli_runner.go:110] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io --format {{.Names}}
	I1109 13:46:00.371392  467185 addons.go:371] enableAddons start: toEnable=map[], additional=[]
	I1109 13:46:00.381053  467185 addons.go:55] Setting storage-provisioner=true in profile "pause-20201109133858-342799"
	I1109 13:46:00.381081  467185 addons.go:131] Setting addon storage-provisioner=true in "pause-20201109133858-342799"
	W1109 13:46:00.381095  467185 addons.go:140] addon storage-provisioner should already be in state true
	I1109 13:46:00.381116  467185 host.go:66] Checking if "pause-20201109133858-342799" exists ...
	I1109 13:46:00.381941  467185 cli_runner.go:110] Run: docker container inspect pause-20201109133858-342799 --format={{.State.Status}}
	I1109 13:46:00.382733  467185 addons.go:55] Setting default-storageclass=true in profile "pause-20201109133858-342799"
	I1109 13:46:00.382761  467185 addons.go:274] enableOrDisableStorageClasses default-storageclass=true on "pause-20201109133858-342799"
	I1109 13:46:00.383218  467185 cli_runner.go:110] Run: docker container inspect pause-20201109133858-342799 --format={{.State.Status}}
	I1109 13:46:00.456104  467185 api_server.go:48] waiting for apiserver process to appear ...
	I1109 13:46:00.456178  467185 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:46:00.475709  467185 addons.go:243] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 13:46:00.475737  467185 ssh_runner.go:215] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 13:46:00.475807  467185 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20201109133858-342799
	I1109 13:46:00.477367  467185 cli_runner.go:110] Run: docker container inspect functional-20201109132758-342799 --format={{.State.Status}}
	I1109 13:46:00.534632  467185 kapi.go:59] client config for pause-20201109133858-342799: &rest.Config{Host:"https://192.168.82.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/profiles/pause-20201109133858-342799/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/profiles/pause-20201109133858-342799/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17704c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
	I1109 13:46:00.544332  467185 addons.go:131] Setting addon default-storageclass=true in "pause-20201109133858-342799"
	W1109 13:46:00.544373  467185 addons.go:140] addon default-storageclass should already be in state true
	I1109 13:46:00.544401  467185 host.go:66] Checking if "pause-20201109133858-342799" exists ...
	I1109 13:46:00.544987  467185 cli_runner.go:110] Run: docker container inspect pause-20201109133858-342799 --format={{.State.Status}}
	I1109 13:46:00.587004  467185 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 13:46:00.587119  467185 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20201109132758-342799
	I1109 13:46:00.588707  467185 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/machines/pause-20201109133858-342799/id_rsa Username:docker}
	I1109 13:46:00.628459  467185 addons.go:243] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 13:46:00.628495  467185 ssh_runner.go:215] scp deploy/addons/storageclass/storageclass.yaml.tmpl --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 13:46:00.628572  467185 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20201109133858-342799
	I1109 13:46:00.667250  467185 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/machines/functional-20201109132758-342799/id_rsa Username:docker}
	I1109 13:46:00.696346  467185 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/machines/pause-20201109133858-342799/id_rsa Username:docker}
	I1109 13:46:00.726020  467185 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 13:46:00.839408  467185 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 13:46:03.207300  467185 ssh_runner.go:188] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.751101493s)
	I1109 13:46:03.207337  467185 api_server.go:68] duration metric: took 2.836252301s to wait for apiserver process to appear ...
	I1109 13:46:03.207354  467185 api_server.go:84] waiting for apiserver healthz status ...
	I1109 13:46:03.207369  467185 api_server.go:221] Checking apiserver healthz at https://192.168.82.16:8443/healthz ...
	I1109 13:46:03.207615  467185 ssh_runner.go:188] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.2/kubectl scale deployment --replicas=1 coredns -n=kube-system: (2.826682804s)
	I1109 13:46:03.207635  467185 start.go:553] successfully scaled coredns replicas to 1
	I1109 13:46:03.260517  467185 api_server.go:241] https://192.168.82.16:8443/healthz returned 200:
	ok
	I1109 13:46:03.261833  467185 api_server.go:137] control plane version: v1.19.2
	I1109 13:46:03.261859  467185 api_server.go:127] duration metric: took 54.496098ms to wait for apiserver health ...
	I1109 13:46:03.261873  467185 system_pods.go:41] waiting for kube-system pods to appear ...
	I1109 13:46:03.286237  467185 system_pods.go:57] 6 kube-system pods found
	I1109 13:46:03.286271  467185 system_pods.go:59] "coredns-f9fd979d6-mzdxp" [d5876a87-498a-443f-abe3-db2bf0fd7e31] Running
	I1109 13:46:03.286287  467185 system_pods.go:59] "etcd-pause-20201109133858-342799" [a61c86e8-9514-4e22-adc9-b61173f06654] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 13:46:03.286295  467185 system_pods.go:59] "kube-apiserver-pause-20201109133858-342799" [a1b30a2f-3a5f-48e9-b24d-982efae08fba] Running
	I1109 13:46:03.286311  467185 system_pods.go:59] "kube-controller-manager-pause-20201109133858-342799" [e52ca962-961b-419a-9c5c-fc57ad1be218] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 13:46:03.286320  467185 system_pods.go:59] "kube-proxy-zsvph" [262744fc-0400-43be-b5e5-16316d67e22c] Running
	I1109 13:46:03.286331  467185 system_pods.go:59] "kube-scheduler-pause-20201109133858-342799" [2db9ddc3-493e-4172-9b76-6ee4bc3889a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 13:46:03.286339  467185 system_pods.go:72] duration metric: took 24.459805ms to wait for pod list to return data ...
	I1109 13:46:03.286352  467185 default_sa.go:33] waiting for default service account to be created ...
	I1109 13:46:03.291251  467185 default_sa.go:44] found service account: "default"
	I1109 13:46:03.291295  467185 default_sa.go:54] duration metric: took 4.935496ms for default service account to be created ...
	I1109 13:46:03.291310  467185 system_pods.go:114] waiting for k8s-apps to be running ...
	I1109 13:46:03.297653  467185 system_pods.go:84] 6 kube-system pods found
	I1109 13:46:03.297697  467185 system_pods.go:87] "coredns-f9fd979d6-mzdxp" [d5876a87-498a-443f-abe3-db2bf0fd7e31] Running
	I1109 13:46:03.297716  467185 system_pods.go:87] "etcd-pause-20201109133858-342799" [a61c86e8-9514-4e22-adc9-b61173f06654] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 13:46:03.297727  467185 system_pods.go:87] "kube-apiserver-pause-20201109133858-342799" [a1b30a2f-3a5f-48e9-b24d-982efae08fba] Running
	I1109 13:46:03.297747  467185 system_pods.go:87] "kube-controller-manager-pause-20201109133858-342799" [e52ca962-961b-419a-9c5c-fc57ad1be218] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 13:46:03.297794  467185 system_pods.go:87] "kube-proxy-zsvph" [262744fc-0400-43be-b5e5-16316d67e22c] Running
	I1109 13:46:03.297806  467185 system_pods.go:87] "kube-scheduler-pause-20201109133858-342799" [2db9ddc3-493e-4172-9b76-6ee4bc3889a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 13:46:03.297816  467185 system_pods.go:124] duration metric: took 6.49787ms to wait for k8s-apps to be running ...
	I1109 13:46:03.297829  467185 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 13:46:03.297890  467185 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service kubelet
	I1109 13:46:03.592715  467185 ssh_runner.go:188] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.866632841s)
	I1109 13:46:03.592837  467185 ssh_runner.go:188] Completed: docker images --format {{.Repository}}:{{.Tag}}: (3.005802496s)
	I1109 13:46:03.592870  467185 docker.go:381] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.19.2
	k8s.gcr.io/kube-apiserver:v1.19.2
	k8s.gcr.io/kube-controller-manager:v1.19.2
	k8s.gcr.io/kube-scheduler:v1.19.2
	minikube-local-cache-test:functional-20201109132758-342799
	gcr.io/k8s-minikube/storage-provisioner:v3
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/dashboard:v2.0.3
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.3
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/pause:3.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I1109 13:46:03.592880  467185 cache_images.go:74] Images are preloaded, skipping loading
	I1109 13:46:03.593104  467185 ssh_runner.go:188] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.753659568s)
	I1109 13:46:03.599559  467185 out.go:110] * Enabled addons: storage-provisioner, default-storageclass
	I1109 13:46:03.593512  467185 system_svc.go:56] duration metric: took 295.677774ms WaitForService to wait for kubelet.
	I1109 13:46:03.599605  467185 addons.go:373] enableAddons completed in 3.228217458s
	I1109 13:46:03.599642  467185 kubeadm.go:474] duration metric: took 3.22854874s to wait for : map[apiserver:true apps_running:true default_sa:true kubelet:true system_pods:true] ...
	I1109 13:46:03.599700  467185 node_conditions.go:101] verifying NodePressure condition ...
	I1109 13:46:03.593639  467185 cli_runner.go:110] Run: docker container inspect missing-upgrade-20201109134358-342799 --format={{.State.Status}}
	I1109 13:46:03.603973  467185 node_conditions.go:121] node storage ephemeral capacity is 515928484Ki
	I1109 13:46:03.604008  467185 node_conditions.go:122] node cpu capacity is 8
	I1109 13:46:03.604026  467185 node_conditions.go:104] duration metric: took 4.314478ms to run NodePressure ...
	I1109 13:46:03.604042  467185 start.go:203] waiting for startup goroutines ...
	W1109 13:46:03.659717  467185 cli_runner.go:148] docker container inspect missing-upgrade-20201109134358-342799 --format={{.State.Status}} returned with exit code 1
	W1109 13:46:03.659798  467185 cache_images.go:201] error getting status for missing-upgrade-20201109134358-342799: state: unknown state "missing-upgrade-20201109134358-342799": docker container inspect missing-upgrade-20201109134358-342799 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20201109134358-342799
	I1109 13:46:03.660458  467185 cli_runner.go:110] Run: docker container inspect old-k8s-version-20201109134552-342799 --format={{.State.Status}}
	I1109 13:46:03.726301  467185 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 13:46:03.726364  467185 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20201109134552-342799
	I1109 13:46:03.786859  467185 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/machines/old-k8s-version-20201109134552-342799/id_rsa Username:docker}
	I1109 13:46:03.955103  467185 docker.go:381] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v3
	kubernetesui/dashboard:v2.0.3
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/kube-proxy:v1.13.0
	k8s.gcr.io/kube-apiserver:v1.13.0
	k8s.gcr.io/kube-scheduler:v1.13.0
	k8s.gcr.io/kube-controller-manager:v1.13.0
	k8s.gcr.io/coredns:1.2.6
	k8s.gcr.io/etcd:3.2.24
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1109 13:46:03.955131  467185 docker.go:386] minikube-local-cache-test:functional-20201109132758-342799 wasn't preloaded
	I1109 13:46:03.955149  467185 cache_images.go:77] LoadImages start: [minikube-local-cache-test:functional-20201109132758-342799]
	I1109 13:46:03.960090  467185 ssh_runner.go:148] Run: docker image inspect --format {{.Id}} minikube-local-cache-test:functional-20201109132758-342799
	I1109 13:46:04.020980  467185 cache_images.go:105] "minikube-local-cache-test:functional-20201109132758-342799" needs transfer: "minikube-local-cache-test:functional-20201109132758-342799" does not exist at hash "sha256:0a709a27d4dd7a776e9dce357a8ba9ef3c681ff6cc6470ea19976bffb8e0372f" in container runtime
	I1109 13:46:04.021015  467185 cache_images.go:241] Loading image from cache: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/cache/images/minikube-local-cache-test_functional-20201109132758-342799
	I1109 13:46:04.021119  467185 ssh_runner.go:148] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20201109132758-342799
	I1109 13:46:04.025853  467185 ssh_runner.go:205] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20201109132758-342799: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20201109132758-342799: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20201109132758-342799': No such file or directory
	I1109 13:46:04.025896  467185 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/cache/images/minikube-local-cache-test_functional-20201109132758-342799 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20201109132758-342799 (5120 bytes)
	I1109 13:46:04.052662  467185 docker.go:152] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20201109132758-342799
	I1109 13:46:04.052734  467185 ssh_runner.go:148] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20201109132758-342799
	I1109 13:46:04.420432  467185 cache_images.go:263] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/cache/images/minikube-local-cache-test_functional-20201109132758-342799 from cache
	I1109 13:46:04.420468  467185 cache_images.go:112] Successfully loaded all cached images
	I1109 13:46:04.420480  467185 cache_images.go:81] LoadImages completed in 465.318683ms
	I1109 13:46:04.421049  467185 cli_runner.go:110] Run: docker container inspect pause-20201109133858-342799 --format={{.State.Status}}
	I1109 13:46:04.481563  467185 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 13:46:04.481636  467185 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20201109133858-342799
	I1109 13:46:04.548999  467185 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/machines/pause-20201109133858-342799/id_rsa Username:docker}
	I1109 13:46:04.720432  467185 docker.go:381] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.19.2
	k8s.gcr.io/kube-apiserver:v1.19.2
	k8s.gcr.io/kube-controller-manager:v1.19.2
	k8s.gcr.io/kube-scheduler:v1.19.2
	minikube-local-cache-test:functional-20201109132758-342799
	gcr.io/k8s-minikube/storage-provisioner:v3
	k8s.gcr.io/etcd:3.4.13-0
	kubernetesui/dashboard:v2.0.3
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I1109 13:46:04.720464  467185 cache_images.go:74] Images are preloaded, skipping loading
	I1109 13:46:04.721013  467185 cli_runner.go:110] Run: docker container inspect stopped-upgrade-20201109134422-342799 --format={{.State.Status}}
	I1109 13:46:04.792183  467185 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 13:46:04.792240  467185 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20201109134422-342799
	I1109 13:46:04.876684  467185 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:33067 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/machines/stopped-upgrade-20201109134422-342799/id_rsa Username:docker}
	I1109 13:46:05.047139  467185 docker.go:381] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-20201109132758-342799
	gcr.io/k8s-minikube/storage-provisioner:v3
	kubernetesui/dashboard:v2.0.3
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/kube-proxy:v1.17.3
	k8s.gcr.io/kube-controller-manager:v1.17.3
	k8s.gcr.io/kube-apiserver:v1.17.3
	k8s.gcr.io/kube-scheduler:v1.17.3
	kubernetesui/dashboard:v2.0.0-beta8
	k8s.gcr.io/coredns:1.6.5
	kindest/kindnetd:0.5.3
	k8s.gcr.io/etcd:3.4.3-0
	kubernetesui/metrics-scraper:v1.0.2
	k8s.gcr.io/pause:3.1
	gcr.io/k8s-minikube/storage-provisioner:v1.8.1
	
	-- /stdout --
	I1109 13:46:05.047173  467185 cache_images.go:74] Images are preloaded, skipping loading
	I1109 13:46:05.047190  467185 cache_images.go:227] succeeded pushing to: functional-20201109132758-342799 old-k8s-version-20201109134552-342799 pause-20201109133858-342799 stopped-upgrade-20201109134422-342799
	I1109 13:46:05.047198  467185 cache_images.go:228] failed pushing to: missing-upgrade-20201109134358-342799
	I1109 13:46:05.151661  467185 start.go:461] kubectl: 1.19.3, cluster: 1.19.2 (minor skew: 0)
	I1109 13:46:05.159631  467185 out.go:110] * Done! kubectl is now configured to use "pause-20201109133858-342799" cluster and "default" namespace by default

** /stderr **
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:225: (dbg) Run:  docker inspect pause-20201109133858-342799
helpers_test.go:229: (dbg) docker inspect pause-20201109133858-342799:

-- stdout --
	[
	    {
	        "Id": "5d1ee083fcd20a82d672fad4b54f75fd99cd7ba13a37f8feba033b09efdc49ab",
	        "Created": "2020-11-09T21:40:09.542396209Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 447056,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-11-09T21:40:10.202530718Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e0876e0a2db41a04c8143cbd27a7a5f9e10b610cb093093def031c59e5b44b0c",
	        "ResolvConfPath": "/var/lib/docker/containers/5d1ee083fcd20a82d672fad4b54f75fd99cd7ba13a37f8feba033b09efdc49ab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5d1ee083fcd20a82d672fad4b54f75fd99cd7ba13a37f8feba033b09efdc49ab/hostname",
	        "HostsPath": "/var/lib/docker/containers/5d1ee083fcd20a82d672fad4b54f75fd99cd7ba13a37f8feba033b09efdc49ab/hosts",
	        "LogPath": "/var/lib/docker/containers/5d1ee083fcd20a82d672fad4b54f75fd99cd7ba13a37f8feba033b09efdc49ab/5d1ee083fcd20a82d672fad4b54f75fd99cd7ba13a37f8feba033b09efdc49ab-json.log",
	        "Name": "/pause-20201109133858-342799",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20201109133858-342799:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20201109133858-342799",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 1887436800,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/921239716d8836596addf0546e2b6a0643392eb82471f605adf5b3c8c32ed8f4-init/diff:/var/lib/docker/overlay2/95e86d765bff52afd170fbce58d14aba80acb592622f18ddf93f06d4a9f68b7b/diff:/var/lib/docker/overlay2/f3b9d7ca87d0bc2c33a2b73d4acfe36b9573ed84758410f9fb2646a520f241fe/diff:/var/lib/docker/overlay2/207fc0f6ee67dd96ccb25c63e93e7189f5f397b509f8c71ead5583a80c5f94b5/diff:/var/lib/docker/overlay2/8caa5ec9f05b14f99c7708700c1734fe9d519d629a5228c0c3cb1231f1cef09e/diff:/var/lib/docker/overlay2/f2f5d9470c647e56b32cee1438db6b6249291738854b6e7a0a4a405034649aa6/diff:/var/lib/docker/overlay2/99b3672a581ed70eb5d25da5aed44455272a7adbd48148c91dc38abfbaa6e93d/diff:/var/lib/docker/overlay2/4bc3f7ac122327f368cdfb9bf82d578fce25cf7f63160aa08d869610da8889f4/diff:/var/lib/docker/overlay2/0f945a2ada7520d0f95289ddf08a8e7aedfd5dbfad8087529175cd81d61b52bc/diff:/var/lib/docker/overlay2/a14122d44a5bdb05416a0e7fbe1684738e86454c2eeb7671ecc856e6fdaa0273/diff:/var/lib/docker/overlay2/e669c0
1451df27b1aeed4835d4d9d40fae282947f2b260d9bc1d86355f464b80/diff:/var/lib/docker/overlay2/f0227d1f2e3671850f58a9e89d347d8c2a26c40d79b078f02122819452c582d8/diff:/var/lib/docker/overlay2/35b3431b00b5c7bd66b1d24487a8a45645708ce7a87848986068c0df1515b812/diff:/var/lib/docker/overlay2/c1af3457c560ed19e519e64ff919ac7035d8d9801f4f44adf7b160339827ebd0/diff:/var/lib/docker/overlay2/3861f37a52f49a93d483d69cc77384a5792b260325af8224c6225039f7480889/diff:/var/lib/docker/overlay2/05239e7556fc89f82e761ff1ddd2d1ed0ee1b426a039ad3c9a3ba0519d737f51/diff:/var/lib/docker/overlay2/cb1f04e7554841e6f2102080f7854a3c7e464e45648c1419ef2f8b4d8a03f0bb/diff:/var/lib/docker/overlay2/80908997b1122a686bee670e8dd93659dece6b51614d8576f02c11b96178534d/diff:/var/lib/docker/overlay2/6c4d0be909f5f0a95818a6009b44c355b57e39b0709de1cce2ba45597354d590/diff:/var/lib/docker/overlay2/3a855d5f2b162c2d938a5745ba8131c9a2b244967b4a03f6cddd177236f3a934/diff:/var/lib/docker/overlay2/0ab0a2a3e614830a41a389aa6ff55503f9c10b1227acfb927550f9726eb9a605/diff:/var/lib/d
ocker/overlay2/3331dc047d4c1c4d3b6bf48aba65be1ad34cff6d25329ef56e91ad8e4ee65fde/diff:/var/lib/docker/overlay2/d286c2ac5bb00869e09924bb7fc7389073b7267af1cc4c3ae6a339b29e4c0cc1/diff:/var/lib/docker/overlay2/d0a518fbd2bc93756d11040cfe4d7757f96e813dd9ddf918a54a82f7e61cb791/diff:/var/lib/docker/overlay2/2d63359a4d0da5faa54d017f487391caaee972fe4fb98db4b82e7aee2d9b01ee/diff:/var/lib/docker/overlay2/40ca142bba947004e2782bce25bd04810589fffbf8086ba91e3a074517cfa13c/diff:/var/lib/docker/overlay2/5b738c1270b81825ef44e9bd466dd8533bc4e3e3796355d7dc5dc104fb3a18c8/diff:/var/lib/docker/overlay2/c57d8a6f6b648bd2cf9d7140762b8590e1a71b9040c8a0f2b8ab31f65011bcd2/diff:/var/lib/docker/overlay2/37f83aba64f92a6cf695468f5b0a1016634c95e09933dcecec297cca22b9038b/diff:/var/lib/docker/overlay2/396d3843ed4e04893bd4bf961eab2522ce90962e626e057f5be38cbe0ed65d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/921239716d8836596addf0546e2b6a0643392eb82471f605adf5b3c8c32ed8f4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/921239716d8836596addf0546e2b6a0643392eb82471f605adf5b3c8c32ed8f4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/921239716d8836596addf0546e2b6a0643392eb82471f605adf5b3c8c32ed8f4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20201109133858-342799",
	                "Source": "/var/lib/docker/volumes/pause-20201109133858-342799/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20201109133858-342799",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20201109133858-342799",
	                "name.minikube.sigs.k8s.io": "pause-20201109133858-342799",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "880435c4d6eb629429bcd0ef96a8959198c30bf27b4248d041687038354e1a0a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33038"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33037"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33036"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33035"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/880435c4d6eb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20201109133858-342799": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.82.16"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5d1ee083fcd2"
	                    ],
	                    "NetworkID": "23179b3ee58429d5ebe8797ac6d8a341366b6bdcddb25cfa4618d074d7f628c9",
	                    "EndpointID": "791166cf42e26a8f2c833ae0ff625c6b7d1efcc4e57edb6c22eb09c6b0fa002e",
	                    "Gateway": "192.168.82.1",
	                    "IPAddress": "192.168.82.16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:52:10",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20201109133858-342799 -n pause-20201109133858-342799
helpers_test.go:238: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p pause-20201109133858-342799 logs -n 25
helpers_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p pause-20201109133858-342799 logs -n 25: (3.666263364s)
helpers_test.go:246: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Mon 2020-11-09 21:40:14 UTC, end at Mon 2020-11-09 21:46:06 UTC. --
	* Nov 09 21:45:34 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:34.830767729Z" level=info msg="Starting up"
	* Nov 09 21:45:34 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:34.834063226Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Nov 09 21:45:34 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:34.834113336Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Nov 09 21:45:34 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:34.834154862Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	* Nov 09 21:45:34 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:34.834172925Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Nov 09 21:45:34 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:34.836064034Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Nov 09 21:45:34 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:34.836107998Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Nov 09 21:45:34 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:34.836157252Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	* Nov 09 21:45:34 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:34.836174983Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Nov 09 21:45:34 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:34.878476000Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.123238372Z" level=warning msg="Your kernel does not support swap memory limit"
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.123300275Z" level=warning msg="Your kernel does not support cgroup rt period"
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.123310196Z" level=warning msg="Your kernel does not support cgroup rt runtime"
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.123552740Z" level=info msg="Loading containers: start."
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.352798456Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.614416782Z" level=info msg="Removing stale sandbox 7562aa23c4359ee31f453b7d78331337267dcfd40da32f8ac7d319e0ded0ea16 (50e54b941795a53b5a2f568e4512d2658101772235fe9a789b4cfda4bb7b0b0a)"
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.617525203Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 2a9c6788ec4e5f9b3e50275253fd4092344a985b996c7ee92f1220b711b70370 ae021a7ac63f5642fd1b92e73a7557932761147f2447beb9a2e52f60ad6b0388], retrying...."
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.642607448Z" level=info msg="There are old running containers, the network config will not take affect"
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.652108453Z" level=info msg="Loading containers: done."
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.710522720Z" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 version=19.03.13
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.710638225Z" level=info msg="Daemon has completed initialization"
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.731510485Z" level=info msg="API listen on /var/run/docker.sock"
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.731548963Z" level=info msg="API listen on [::]:2376"
	* Nov 09 21:45:35 pause-20201109133858-342799 systemd[1]: Started Docker Application Container Engine.
	* Nov 09 21:45:59 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:59.020880011Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	* 2f46263465c23       bad58561c4be7       2 seconds ago       Running             storage-provisioner       0                   9715bcf4a684f
	* 35cf3ceecba8a       bfe3a36ebd252       7 seconds ago       Running             coredns                   1                   98b220fe16c56
	* 14eedce1a9425       d373dd5a8593a       8 seconds ago       Running             kube-proxy                1                   e26a8844cb8f2
	* 8ca92a7a9b697       0369cf4303ffd       18 seconds ago      Running             etcd                      1                   40e8e5bae4004
	* b6cfa8b1f67f7       8603821e1a7a5       18 seconds ago      Running             kube-controller-manager   2                   6e2258cfd1d3e
	* 2a6038f870331       2f32d66b884f8       18 seconds ago      Running             kube-scheduler            1                   b1423a15878d5
	* 9712a6d185a5f       607331163122e       18 seconds ago      Running             kube-apiserver            1                   6d5a1f59a5fcf
	* 1834a7dec7a3b       8603821e1a7a5       42 seconds ago      Created             kube-controller-manager   1                   50e54b941795a
	* b3b67c2eed1c1       bfe3a36ebd252       4 minutes ago       Exited              coredns                   0                   0403123a067b1
	* a8205b634d161       d373dd5a8593a       4 minutes ago       Exited              kube-proxy                0                   9af6016feab9b
	* 227da54d6a6f9       2f32d66b884f8       5 minutes ago       Exited              kube-scheduler            0                   4c4a2bbf22241
	* b608f3b390214       0369cf4303ffd       5 minutes ago       Exited              etcd                      0                   64ad079da229e
	* 0e411220a11dc       607331163122e       5 minutes ago       Exited              kube-apiserver            0                   4d15c72bacbcf
	* 
	* ==> coredns [35cf3ceecba8] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* 
	* ==> coredns [b3b67c2eed1c] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* [INFO] SIGTERM: Shutting down servers then terminating
	* [INFO] plugin/health: Going into lameduck mode for 5s
	* 
	* ==> describe nodes <==
	* Name:               pause-20201109133858-342799
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=pause-20201109133858-342799
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=21ac2a6a37964be4739a8be2fb5a50a8d224597d
	*                     minikube.k8s.io/name=pause-20201109133858-342799
	*                     minikube.k8s.io/updated_at=2020_11_09T13_41_15_0700
	*                     minikube.k8s.io/version=v1.14.2
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Mon, 09 Nov 2020 21:41:08 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  pause-20201109133858-342799
	*   AcquireTime:     <unset>
	*   RenewTime:       Mon, 09 Nov 2020 21:45:57 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Mon, 09 Nov 2020 21:45:57 +0000   Mon, 09 Nov 2020 21:41:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Mon, 09 Nov 2020 21:45:57 +0000   Mon, 09 Nov 2020 21:41:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Mon, 09 Nov 2020 21:45:57 +0000   Mon, 09 Nov 2020 21:41:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Mon, 09 Nov 2020 21:45:57 +0000   Mon, 09 Nov 2020 21:41:26 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.82.16
	*   Hostname:    pause-20201109133858-342799
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 4d8e2e9bd90d4aa38d4d73a4ff6ebd00
	*   System UUID:                fe7bca9f-3d05-4886-889c-3f3fc30c7bb5
	*   Boot ID:                    9ad1ab50-5be9-48e2-8ae1-dc31113bc120
	*   Kernel Version:             4.9.0-14-amd64
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://19.3.13
	*   Kubelet Version:            v1.19.2
	*   Kube-Proxy Version:         v1.19.2
	* Non-terminated Pods:          (7 in total)
	*   Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	*   kube-system                 coredns-f9fd979d6-mzdxp                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m38s
	*   kube-system                 etcd-pause-20201109133858-342799                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	*   kube-system                 kube-apiserver-pause-20201109133858-342799             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m50s
	*   kube-system                 kube-controller-manager-pause-20201109133858-342799    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m50s
	*   kube-system                 kube-proxy-zsvph                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	*   kube-system                 kube-scheduler-pause-20201109133858-342799             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m50s
	*   kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests   Limits
	*   --------           --------   ------
	*   cpu                650m (8%)  0 (0%)
	*   memory             70Mi (0%)  170Mi (0%)
	*   ephemeral-storage  0 (0%)     0 (0%)
	*   hugepages-1Gi      0 (0%)     0 (0%)
	*   hugepages-2Mi      0 (0%)     0 (0%)
	* Events:
	*   Type    Reason                   Age                    From        Message
	*   ----    ------                   ----                   ----        -------
	*   Normal  NodeHasNoDiskPressure    5m13s (x7 over 5m19s)  kubelet     Node pause-20201109133858-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeAllocatableEnforced  5m13s                  kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeHasSufficientMemory  5m8s (x8 over 5m19s)   kubelet     Node pause-20201109133858-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasSufficientPID     5m8s (x8 over 5m19s)   kubelet     Node pause-20201109133858-342799 status is now: NodeHasSufficientPID
	*   Normal  Starting                 4m51s                  kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  4m51s                  kubelet     Node pause-20201109133858-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    4m51s                  kubelet     Node pause-20201109133858-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     4m51s                  kubelet     Node pause-20201109133858-342799 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             4m51s                  kubelet     Node pause-20201109133858-342799 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  4m51s                  kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                4m41s                  kubelet     Node pause-20201109133858-342799 status is now: NodeReady
	*   Normal  Starting                 4m16s                  kube-proxy  Starting kube-proxy.
	*   Normal  Starting                 21s                    kubelet     Starting kubelet.
	*   Normal  NodeAllocatableEnforced  21s                    kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeHasSufficientMemory  20s (x8 over 21s)      kubelet     Node pause-20201109133858-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    20s (x8 over 21s)      kubelet     Node pause-20201109133858-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     20s (x7 over 21s)      kubelet     Node pause-20201109133858-342799 status is now: NodeHasSufficientPID
	*   Normal  Starting                 8s                     kube-proxy  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [Nov 9 21:28] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:29] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:31] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:32] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +18.886743] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:33] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:34] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +28.771909] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:35] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:37] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:38] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:39] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +0.249210] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +0.110709] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +27.722706] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:40] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:41] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +4.475323] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:42] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +28.080349] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:44] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +34.380797] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +1.362576] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:45] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +8.618364] cgroup: cgroup2: unknown option "nsdelegate"
	* 
	* ==> etcd [8ca92a7a9b69] <==
	* 2020-11-09 21:45:51.409814 I | etcdserver: restarting member 9364466d61f1ec2b in cluster ce8177bd8a545254 at commit index 573
	* raft2020/11/09 21:45:51 INFO: 9364466d61f1ec2b switched to configuration voters=()
	* raft2020/11/09 21:45:51 INFO: 9364466d61f1ec2b became follower at term 2
	* raft2020/11/09 21:45:51 INFO: newRaft 9364466d61f1ec2b [peers: [], term: 2, commit: 573, applied: 0, lastindex: 573, lastterm: 2]
	* 2020-11-09 21:45:51.413859 W | auth: simple token is not cryptographically signed
	* 2020-11-09 21:45:51.424409 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	* raft2020/11/09 21:45:51 INFO: 9364466d61f1ec2b switched to configuration voters=(10620691256855096363)
	* 2020-11-09 21:45:51.427358 I | etcdserver/membership: added member 9364466d61f1ec2b [https://192.168.82.16:2380] to cluster ce8177bd8a545254
	* 2020-11-09 21:45:51.427531 N | etcdserver/membership: set the initial cluster version to 3.4
	* 2020-11-09 21:45:51.427595 I | etcdserver/api: enabled capabilities for version 3.4
	* 2020-11-09 21:45:51.449134 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	* 2020-11-09 21:45:51.449279 I | embed: listening for peers on 192.168.82.16:2380
	* 2020-11-09 21:45:51.449461 I | embed: listening for metrics on http://127.0.0.1:2381
	* raft2020/11/09 21:45:52 INFO: 9364466d61f1ec2b is starting a new election at term 2
	* raft2020/11/09 21:45:52 INFO: 9364466d61f1ec2b became candidate at term 3
	* raft2020/11/09 21:45:52 INFO: 9364466d61f1ec2b received MsgVoteResp from 9364466d61f1ec2b at term 3
	* raft2020/11/09 21:45:52 INFO: 9364466d61f1ec2b became leader at term 3
	* raft2020/11/09 21:45:52 INFO: raft.node: 9364466d61f1ec2b elected leader 9364466d61f1ec2b at term 3
	* 2020-11-09 21:45:52.513665 I | etcdserver: published {Name:pause-20201109133858-342799 ClientURLs:[https://192.168.82.16:2379]} to cluster ce8177bd8a545254
	* 2020-11-09 21:45:52.513692 I | embed: ready to serve client requests
	* 2020-11-09 21:45:52.513714 I | embed: ready to serve client requests
	* 2020-11-09 21:45:52.517002 I | embed: serving client requests on 192.168.82.16:2379
	* 2020-11-09 21:45:52.517050 I | embed: serving client requests on 127.0.0.1:2379
	* 2020-11-09 21:45:59.602997 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:46:06.194212 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> etcd [b608f3b39021] <==
	* 2020-11-09 21:44:36.791441 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:44:36.796657 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (8.394669287s) to execute
	* 2020-11-09 21:44:36.985758 W | etcdserver: request "header:<ID:17017825011236489887 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:6c2b75aef4824a9e>" with result "size:41" took too long (157.058718ms) to execute
	* 2020-11-09 21:44:46.792554 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:44:53.851019 W | etcdserver: request "header:<ID:17017825011236489946 > lease_revoke:<id:6c2b75aef4824a9e>" with result "size:29" took too long (1.78219064s) to execute
	* 2020-11-09 21:44:54.076234 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000199657s) to execute
	* WARNING: 2020/11/09 21:44:54 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	* 2020-11-09 21:44:54.741676 W | wal: sync duration of 2.455124823s, expected less than 1s
	* 2020-11-09 21:44:55.070246 W | etcdserver: request "header:<ID:17017825011236489947 > lease_revoke:<id:6c2b75aef4824aaa>" with result "size:29" took too long (328.333044ms) to execute
	* 2020-11-09 21:44:55.070657 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (985.784146ms) to execute
	* 2020-11-09 21:44:56.791315 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:44:57.037560 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (338.032352ms) to execute
	* 2020-11-09 21:44:57.037685 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (962.000538ms) to execute
	* 2020-11-09 21:44:57.037767 W | etcdserver: read-only range request "key:\"/registry/storageclasses\" range_end:\"/registry/storageclasset\" count_only:true " with result "range_response_count:0 size:5" took too long (1.507935377s) to execute
	* 2020-11-09 21:45:04.642898 W | etcdserver: request "header:<ID:17017825011236489982 > lease_revoke:<id:6c2b75aef4824acd>" with result "size:29" took too long (784.256115ms) to execute
	* 2020-11-09 21:45:04.659771 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (583.115251ms) to execute
	* 2020-11-09 21:45:04.659939 W | etcdserver: read-only range request "key:\"/registry/replicasets\" range_end:\"/registry/replicasett\" count_only:true " with result "range_response_count:0 size:7" took too long (150.062924ms) to execute
	* 2020-11-09 21:45:06.791376 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:45:10.355317 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (1.953383369s) to execute
	* 2020-11-09 21:45:16.794906 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:45:20.401451 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (1.752742541s) to execute
	* 2020-11-09 21:45:21.057448 N | pkg/osutil: received terminated signal, shutting down...
	* WARNING: 2020/11/09 21:45:21 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: operation was canceled". Reconnecting...
	* WARNING: 2020/11/09 21:45:21 grpc: addrConn.createTransport failed to connect to {192.168.82.16:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.82.16:2379: operation was canceled". Reconnecting...
	* 2020-11-09 21:45:21.193205 I | etcdserver: skipped leadership transfer for single voting member cluster
	* 
	* ==> kernel <==
	*  21:46:08 up  1:28,  0 users,  load average: 12.91, 12.49, 8.52
	* Linux pause-20201109133858-342799 4.9.0-14-amd64 #1 SMP Debian 4.9.240-2 (2020-10-30) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [0e411220a11d] <==
	* W1109 21:45:30.723088       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.730151       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.749426       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.750610       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.754037       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.802327       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.816107       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.837536       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.841165       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.844723       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.846560       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.890200       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.908370       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.910825       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.943155       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.955759       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.961741       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.986308       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.996501       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* I1109 21:45:31.026267       1 client.go:360] parsed scheme: "passthrough"
	* I1109 21:45:31.026331       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I1109 21:45:31.026344       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* W1109 21:45:31.026664       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:31.064270       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:31.064280       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* 
	* ==> kube-apiserver [9712a6d185a5] <==
	* I1109 21:45:57.482350       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	* I1109 21:45:57.482374       1 crd_finalizer.go:266] Starting CRDFinalizer
	* I1109 21:45:57.482947       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	* I1109 21:45:57.482960       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	* I1109 21:45:57.482984       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	* I1109 21:45:57.482990       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	* I1109 21:45:57.485177       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	* I1109 21:45:57.485217       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	* I1109 21:45:57.586942       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	* E1109 21:45:57.591217       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	* I1109 21:45:57.681856       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	* I1109 21:45:57.691795       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	* I1109 21:45:57.780267       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	* I1109 21:45:57.786177       1 cache.go:39] Caches are synced for autoregister controller
	* I1109 21:45:57.788759       1 cache.go:39] Caches are synced for AvailableConditionController controller
	* I1109 21:45:58.478907       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	* I1109 21:45:58.478955       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	* I1109 21:45:58.487030       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	* I1109 21:46:00.096464       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	* I1109 21:46:00.131797       1 controller.go:606] quota admission added evaluator for: deployments.apps
	* I1109 21:46:00.224706       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	* I1109 21:46:00.273425       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	* I1109 21:46:00.286609       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	* I1109 21:46:03.524535       1 controller.go:606] quota admission added evaluator for: endpoints
	* I1109 21:46:03.979301       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	* 
	* ==> kube-controller-manager [1834a7dec7a3] <==
	* 
	* ==> kube-controller-manager [b6cfa8b1f67f] <==
	* I1109 21:46:04.011278       1 shared_informer.go:247] Caches are synced for service account 
	* I1109 21:46:04.011257       1 shared_informer.go:247] Caches are synced for TTL 
	* I1109 21:46:04.011408       1 shared_informer.go:247] Caches are synced for GC 
	* I1109 21:46:04.011502       1 shared_informer.go:247] Caches are synced for taint 
	* I1109 21:46:04.011562       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	* I1109 21:46:04.011549       1 taint_manager.go:187] Starting NoExecuteTaintManager
	* W1109 21:46:04.011628       1 node_lifecycle_controller.go:1044] Missing timestamp for Node pause-20201109133858-342799. Assuming now as a timestamp.
	* I1109 21:46:04.011671       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
	* I1109 21:46:04.011694       1 shared_informer.go:247] Caches are synced for endpoint 
	* I1109 21:46:04.011981       1 event.go:291] "Event occurred" object="pause-20201109133858-342799" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20201109133858-342799 event: Registered Node pause-20201109133858-342799 in Controller"
	* I1109 21:46:04.020117       1 shared_informer.go:247] Caches are synced for disruption 
	* I1109 21:46:04.020207       1 disruption.go:339] Sending events to api server.
	* I1109 21:46:04.061094       1 shared_informer.go:247] Caches are synced for deployment 
	* I1109 21:46:04.161267       1 shared_informer.go:247] Caches are synced for persistent volume 
	* I1109 21:46:04.177146       1 shared_informer.go:247] Caches are synced for PVC protection 
	* I1109 21:46:04.211723       1 shared_informer.go:247] Caches are synced for attach detach 
	* I1109 21:46:04.211795       1 shared_informer.go:247] Caches are synced for stateful set 
	* I1109 21:46:04.213443       1 shared_informer.go:247] Caches are synced for resource quota 
	* I1109 21:46:04.213500       1 shared_informer.go:247] Caches are synced for expand 
	* I1109 21:46:04.261419       1 shared_informer.go:247] Caches are synced for job 
	* I1109 21:46:04.262432       1 shared_informer.go:247] Caches are synced for resource quota 
	* I1109 21:46:04.275650       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I1109 21:46:04.575995       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:46:04.610214       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:46:04.610250       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* 
	* ==> kube-proxy [14eedce1a942] <==
	* I1109 21:45:59.180651       1 node.go:136] Successfully retrieved node IP: 192.168.82.16
	* I1109 21:45:59.180773       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.82.16), assume IPv4 operation
	* W1109 21:45:59.232580       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
	* I1109 21:45:59.232709       1 server_others.go:186] Using iptables Proxier.
	* W1109 21:45:59.232733       1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
	* I1109 21:45:59.232743       1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
	* I1109 21:45:59.233109       1 server.go:650] Version: v1.19.2
	* I1109 21:45:59.233843       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:45:59.234380       1 config.go:315] Starting service config controller
	* I1109 21:45:59.234402       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I1109 21:45:59.234629       1 config.go:224] Starting endpoint slice config controller
	* I1109 21:45:59.234649       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I1109 21:45:59.334707       1 shared_informer.go:247] Caches are synced for service config 
	* I1109 21:45:59.334742       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* 
	* ==> kube-proxy [a8205b634d16] <==
	* I1109 21:41:51.362277       1 node.go:136] Successfully retrieved node IP: 192.168.82.16
	* I1109 21:41:51.362386       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.82.16), assume IPv4 operation
	* W1109 21:41:51.464157       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
	* I1109 21:41:51.464305       1 server_others.go:186] Using iptables Proxier.
	* W1109 21:41:51.464321       1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
	* I1109 21:41:51.464329       1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
	* I1109 21:41:51.465109       1 server.go:650] Version: v1.19.2
	* I1109 21:41:51.465988       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:41:51.475843       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1109 21:41:51.475999       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1109 21:41:51.476574       1 config.go:224] Starting endpoint slice config controller
	* I1109 21:41:51.476598       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I1109 21:41:51.478410       1 config.go:315] Starting service config controller
	* I1109 21:41:51.481826       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I1109 21:41:51.582874       1 shared_informer.go:247] Caches are synced for service config 
	* I1109 21:41:51.582972       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* E1109 21:43:52.995308       1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": fork/exec /usr/sbin/iptables: permission denied: 
	* I1109 21:43:52.995397       1 proxier.go:850] Sync failed; retrying in 30s
	* W1109 21:43:54.106456       1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": fork/exec /usr/sbin/iptables: permission denied: 
	* E1109 21:44:23.065474       1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": fork/exec /usr/sbin/iptables: permission denied: 
	* I1109 21:44:23.065530       1 proxier.go:850] Sync failed; retrying in 30s
	* W1109 21:44:24.116587       1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": fork/exec /usr/sbin/iptables: permission denied: 
	* W1109 21:44:57.055608       1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": fork/exec /usr/sbin/iptables: permission denied: 
	* E1109 21:44:57.088881       1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": fork/exec /usr/sbin/iptables: permission denied: 
	* I1109 21:44:57.088937       1 proxier.go:850] Sync failed; retrying in 30s
	* 
	* ==> kube-scheduler [227da54d6a6f] <==
	* I1109 21:41:08.391113       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* E1109 21:41:08.394572       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:41:08.394776       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:41:08.394920       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1109 21:41:08.395892       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:41:08.395936       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1109 21:41:08.395967       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:41:08.397232       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1109 21:41:08.397623       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:41:08.404339       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:41:08.404510       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:41:08.404665       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:41:08.404771       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:41:08.406186       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:41:09.243746       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:41:09.244553       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:41:09.259022       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1109 21:41:09.287022       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:41:09.324355       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1109 21:41:09.371692       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:41:09.399355       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:41:09.504152       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:41:09.548306       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:41:09.698691       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* I1109 21:41:11.691495       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kube-scheduler [2a6038f87033] <==
	* I1109 21:45:50.474801       1 serving.go:331] Generated self-signed cert in-memory
	* W1109 21:45:57.503300       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	* W1109 21:45:57.503350       1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	* W1109 21:45:57.503382       1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
	* W1109 21:45:57.503402       1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	* I1109 21:45:57.564777       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:45:57.564816       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:45:57.576872       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:45:57.576958       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:45:57.578707       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	* I1109 21:45:57.578823       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* E1109 21:45:57.666074       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1109 21:45:57.666257       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:45:57.666392       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:45:57.666497       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1109 21:45:57.666601       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:45:57.666729       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:45:57.666847       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:45:57.666973       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:45:57.667066       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:45:57.667284       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:45:57.667426       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:45:57.667462       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:45:57.690003       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* I1109 21:45:59.177179       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2020-11-09 21:40:14 UTC, end at Mon 2020-11-09 21:46:09 UTC. --
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: E1109 21:45:57.218023    5587 kubelet.go:2183] node "pause-20201109133858-342799" not found
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: E1109 21:45:57.318171    5587 kubelet.go:2183] node "pause-20201109133858-342799" not found
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: E1109 21:45:57.419007    5587 kubelet.go:2183] node "pause-20201109133858-342799" not found
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: E1109 21:45:57.520233    5587 kubelet.go:2183] node "pause-20201109133858-342799" not found
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: E1109 21:45:57.657595    5587 kubelet.go:2183] node "pause-20201109133858-342799" not found
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: E1109 21:45:57.761637    5587 kubelet.go:2183] node "pause-20201109133858-342799" not found
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.858462    5587 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.860777    5587 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/262744fc-0400-43be-b5e5-16316d67e22c-kube-proxy") pod "kube-proxy-zsvph" (UID: "262744fc-0400-43be-b5e5-16316d67e22c")
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.866792    5587 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/262744fc-0400-43be-b5e5-16316d67e22c-xtables-lock") pod "kube-proxy-zsvph" (UID: "262744fc-0400-43be-b5e5-16316d67e22c")
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.866876    5587 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/262744fc-0400-43be-b5e5-16316d67e22c-lib-modules") pod "kube-proxy-zsvph" (UID: "262744fc-0400-43be-b5e5-16316d67e22c")
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.866943    5587 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-dnsvs" (UniqueName: "kubernetes.io/secret/262744fc-0400-43be-b5e5-16316d67e22c-kube-proxy-token-dnsvs") pod "kube-proxy-zsvph" (UID: "262744fc-0400-43be-b5e5-16316d67e22c")
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.867839    5587 kubelet_node_status.go:108] Node pause-20201109133858-342799 was previously registered
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.884979    5587 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.885761    5587 kubelet_node_status.go:73] Successfully registered node pause-20201109133858-342799
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.968157    5587 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d5876a87-498a-443f-abe3-db2bf0fd7e31-config-volume") pod "coredns-f9fd979d6-mzdxp" (UID: "d5876a87-498a-443f-abe3-db2bf0fd7e31")
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.968215    5587 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-dbq7j" (UniqueName: "kubernetes.io/secret/d5876a87-498a-443f-abe3-db2bf0fd7e31-coredns-token-dbq7j") pod "coredns-f9fd979d6-mzdxp" (UID: "d5876a87-498a-443f-abe3-db2bf0fd7e31")
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.968236    5587 reconciler.go:157] Reconciler: start to sync state
	* Nov 09 21:45:59 pause-20201109133858-342799 kubelet[5587]: W1109 21:45:59.011922    5587 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-mzdxp through plugin: invalid network status for
	* Nov 09 21:45:59 pause-20201109133858-342799 kubelet[5587]: W1109 21:45:59.092095    5587 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-mzdxp through plugin: invalid network status for
	* Nov 09 21:45:59 pause-20201109133858-342799 kubelet[5587]: W1109 21:45:59.110760    5587 pod_container_deletor.go:79] Container "98b220fe16c567081fd344529bbd54933380924af767e7700b93a282f7d993a0" not found in pod's containers
	* Nov 09 21:46:00 pause-20201109133858-342799 kubelet[5587]: W1109 21:46:00.187492    5587 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-mzdxp through plugin: invalid network status for
	* Nov 09 21:46:03 pause-20201109133858-342799 kubelet[5587]: I1109 21:46:03.590021    5587 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 09 21:46:03 pause-20201109133858-342799 kubelet[5587]: I1109 21:46:03.779453    5587 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/f5afbb6b-6d7b-4091-98ee-ae57f5aa5250-tmp") pod "storage-provisioner" (UID: "f5afbb6b-6d7b-4091-98ee-ae57f5aa5250")
	* Nov 09 21:46:03 pause-20201109133858-342799 kubelet[5587]: I1109 21:46:03.779520    5587 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-r2frn" (UniqueName: "kubernetes.io/secret/f5afbb6b-6d7b-4091-98ee-ae57f5aa5250-storage-provisioner-token-r2frn") pod "storage-provisioner" (UID: "f5afbb6b-6d7b-4091-98ee-ae57f5aa5250")
	* Nov 09 21:46:04 pause-20201109133858-342799 kubelet[5587]: E1109 21:46:04.282645    5587 kuberuntime_manager.go:940] PodSandboxStatus of sandbox "9715bcf4a684fbd133c5c1a6161d643b49afe678907895fa894878bf60d48584" for pod "storage-provisioner_kube-system(f5afbb6b-6d7b-4091-98ee-ae57f5aa5250)" error: rpc error: code = Unknown desc = Error: No such container: 9715bcf4a684fbd133c5c1a6161d643b49afe678907895fa894878bf60d48584
	* 
	* ==> storage-provisioner [2f46263465c2] <==
	* I1109 21:46:04.889375       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I1109 21:46:04.902664       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I1109 21:46:04.902856       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f0bb1955-ed85-4eb2-b6ef-65b97551da7b", APIVersion:"v1", ResourceVersion:"575", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20201109133858-342799_23a9bd5b-76c9-41bb-a187-8f9ef05d25d3 became leader
	* I1109 21:46:04.902991       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_pause-20201109133858-342799_23a9bd5b-76c9-41bb-a187-8f9ef05d25d3!
	* I1109 21:46:05.003294       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_pause-20201109133858-342799_23a9bd5b-76c9-41bb-a187-8f9ef05d25d3!

-- /stdout --
** stderr ** 
	E1109 13:46:08.139083  515408 out.go:286] unable to execute * 2020-11-09 21:44:36.985758 W | etcdserver: request "header:<ID:17017825011236489887 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:6c2b75aef4824a9e>" with result "size:41" took too long (157.058718ms) to execute
	: html/template:* 2020-11-09 21:44:36.985758 W | etcdserver: request "header:<ID:17017825011236489887 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:6c2b75aef4824a9e>" with result "size:41" took too long (157.058718ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.

** /stderr **
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20201109133858-342799 -n pause-20201109133858-342799
helpers_test.go:255: (dbg) Run:  kubectl --context pause-20201109133858-342799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: non-running pods: 
helpers_test.go:263: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:266: (dbg) Run:  kubectl --context pause-20201109133858-342799 describe pod 
helpers_test.go:266: (dbg) Non-zero exit: kubectl --context pause-20201109133858-342799 describe pod : exit status 1 (107.636613ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:268: kubectl --context pause-20201109133858-342799 describe pod : exit status 1
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:225: (dbg) Run:  docker inspect pause-20201109133858-342799
helpers_test.go:229: (dbg) docker inspect pause-20201109133858-342799:

-- stdout --
	[
	    {
	        "Id": "5d1ee083fcd20a82d672fad4b54f75fd99cd7ba13a37f8feba033b09efdc49ab",
	        "Created": "2020-11-09T21:40:09.542396209Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 447056,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-11-09T21:40:10.202530718Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e0876e0a2db41a04c8143cbd27a7a5f9e10b610cb093093def031c59e5b44b0c",
	        "ResolvConfPath": "/var/lib/docker/containers/5d1ee083fcd20a82d672fad4b54f75fd99cd7ba13a37f8feba033b09efdc49ab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5d1ee083fcd20a82d672fad4b54f75fd99cd7ba13a37f8feba033b09efdc49ab/hostname",
	        "HostsPath": "/var/lib/docker/containers/5d1ee083fcd20a82d672fad4b54f75fd99cd7ba13a37f8feba033b09efdc49ab/hosts",
	        "LogPath": "/var/lib/docker/containers/5d1ee083fcd20a82d672fad4b54f75fd99cd7ba13a37f8feba033b09efdc49ab/5d1ee083fcd20a82d672fad4b54f75fd99cd7ba13a37f8feba033b09efdc49ab-json.log",
	        "Name": "/pause-20201109133858-342799",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20201109133858-342799:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20201109133858-342799",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 1887436800,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/921239716d8836596addf0546e2b6a0643392eb82471f605adf5b3c8c32ed8f4-init/diff:/var/lib/docker/overlay2/95e86d765bff52afd170fbce58d14aba80acb592622f18ddf93f06d4a9f68b7b/diff:/var/lib/docker/overlay2/f3b9d7ca87d0bc2c33a2b73d4acfe36b9573ed84758410f9fb2646a520f241fe/diff:/var/lib/docker/overlay2/207fc0f6ee67dd96ccb25c63e93e7189f5f397b509f8c71ead5583a80c5f94b5/diff:/var/lib/docker/overlay2/8caa5ec9f05b14f99c7708700c1734fe9d519d629a5228c0c3cb1231f1cef09e/diff:/var/lib/docker/overlay2/f2f5d9470c647e56b32cee1438db6b6249291738854b6e7a0a4a405034649aa6/diff:/var/lib/docker/overlay2/99b3672a581ed70eb5d25da5aed44455272a7adbd48148c91dc38abfbaa6e93d/diff:/var/lib/docker/overlay2/4bc3f7ac122327f368cdfb9bf82d578fce25cf7f63160aa08d869610da8889f4/diff:/var/lib/docker/overlay2/0f945a2ada7520d0f95289ddf08a8e7aedfd5dbfad8087529175cd81d61b52bc/diff:/var/lib/docker/overlay2/a14122d44a5bdb05416a0e7fbe1684738e86454c2eeb7671ecc856e6fdaa0273/diff:/var/lib/docker/overlay2/e669c0
1451df27b1aeed4835d4d9d40fae282947f2b260d9bc1d86355f464b80/diff:/var/lib/docker/overlay2/f0227d1f2e3671850f58a9e89d347d8c2a26c40d79b078f02122819452c582d8/diff:/var/lib/docker/overlay2/35b3431b00b5c7bd66b1d24487a8a45645708ce7a87848986068c0df1515b812/diff:/var/lib/docker/overlay2/c1af3457c560ed19e519e64ff919ac7035d8d9801f4f44adf7b160339827ebd0/diff:/var/lib/docker/overlay2/3861f37a52f49a93d483d69cc77384a5792b260325af8224c6225039f7480889/diff:/var/lib/docker/overlay2/05239e7556fc89f82e761ff1ddd2d1ed0ee1b426a039ad3c9a3ba0519d737f51/diff:/var/lib/docker/overlay2/cb1f04e7554841e6f2102080f7854a3c7e464e45648c1419ef2f8b4d8a03f0bb/diff:/var/lib/docker/overlay2/80908997b1122a686bee670e8dd93659dece6b51614d8576f02c11b96178534d/diff:/var/lib/docker/overlay2/6c4d0be909f5f0a95818a6009b44c355b57e39b0709de1cce2ba45597354d590/diff:/var/lib/docker/overlay2/3a855d5f2b162c2d938a5745ba8131c9a2b244967b4a03f6cddd177236f3a934/diff:/var/lib/docker/overlay2/0ab0a2a3e614830a41a389aa6ff55503f9c10b1227acfb927550f9726eb9a605/diff:/var/lib/d
ocker/overlay2/3331dc047d4c1c4d3b6bf48aba65be1ad34cff6d25329ef56e91ad8e4ee65fde/diff:/var/lib/docker/overlay2/d286c2ac5bb00869e09924bb7fc7389073b7267af1cc4c3ae6a339b29e4c0cc1/diff:/var/lib/docker/overlay2/d0a518fbd2bc93756d11040cfe4d7757f96e813dd9ddf918a54a82f7e61cb791/diff:/var/lib/docker/overlay2/2d63359a4d0da5faa54d017f487391caaee972fe4fb98db4b82e7aee2d9b01ee/diff:/var/lib/docker/overlay2/40ca142bba947004e2782bce25bd04810589fffbf8086ba91e3a074517cfa13c/diff:/var/lib/docker/overlay2/5b738c1270b81825ef44e9bd466dd8533bc4e3e3796355d7dc5dc104fb3a18c8/diff:/var/lib/docker/overlay2/c57d8a6f6b648bd2cf9d7140762b8590e1a71b9040c8a0f2b8ab31f65011bcd2/diff:/var/lib/docker/overlay2/37f83aba64f92a6cf695468f5b0a1016634c95e09933dcecec297cca22b9038b/diff:/var/lib/docker/overlay2/396d3843ed4e04893bd4bf961eab2522ce90962e626e057f5be38cbe0ed65d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/921239716d8836596addf0546e2b6a0643392eb82471f605adf5b3c8c32ed8f4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/921239716d8836596addf0546e2b6a0643392eb82471f605adf5b3c8c32ed8f4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/921239716d8836596addf0546e2b6a0643392eb82471f605adf5b3c8c32ed8f4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20201109133858-342799",
	                "Source": "/var/lib/docker/volumes/pause-20201109133858-342799/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20201109133858-342799",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20201109133858-342799",
	                "name.minikube.sigs.k8s.io": "pause-20201109133858-342799",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "880435c4d6eb629429bcd0ef96a8959198c30bf27b4248d041687038354e1a0a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33038"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33037"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33036"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33035"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/880435c4d6eb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20201109133858-342799": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.82.16"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5d1ee083fcd2"
	                    ],
	                    "NetworkID": "23179b3ee58429d5ebe8797ac6d8a341366b6bdcddb25cfa4618d074d7f628c9",
	                    "EndpointID": "791166cf42e26a8f2c833ae0ff625c6b7d1efcc4e57edb6c22eb09c6b0fa002e",
	                    "Gateway": "192.168.82.1",
	                    "IPAddress": "192.168.82.16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:52:10",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20201109133858-342799 -n pause-20201109133858-342799
helpers_test.go:238: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p pause-20201109133858-342799 logs -n 25
helpers_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p pause-20201109133858-342799 logs -n 25: (3.315015636s)
helpers_test.go:246: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Mon 2020-11-09 21:40:14 UTC, end at Mon 2020-11-09 21:46:12 UTC. --
	* Nov 09 21:45:34 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:34.830767729Z" level=info msg="Starting up"
	* Nov 09 21:45:34 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:34.834063226Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Nov 09 21:45:34 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:34.834113336Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Nov 09 21:45:34 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:34.834154862Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	* Nov 09 21:45:34 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:34.834172925Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Nov 09 21:45:34 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:34.836064034Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Nov 09 21:45:34 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:34.836107998Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Nov 09 21:45:34 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:34.836157252Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	* Nov 09 21:45:34 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:34.836174983Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Nov 09 21:45:34 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:34.878476000Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.123238372Z" level=warning msg="Your kernel does not support swap memory limit"
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.123300275Z" level=warning msg="Your kernel does not support cgroup rt period"
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.123310196Z" level=warning msg="Your kernel does not support cgroup rt runtime"
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.123552740Z" level=info msg="Loading containers: start."
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.352798456Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.614416782Z" level=info msg="Removing stale sandbox 7562aa23c4359ee31f453b7d78331337267dcfd40da32f8ac7d319e0ded0ea16 (50e54b941795a53b5a2f568e4512d2658101772235fe9a789b4cfda4bb7b0b0a)"
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.617525203Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 2a9c6788ec4e5f9b3e50275253fd4092344a985b996c7ee92f1220b711b70370 ae021a7ac63f5642fd1b92e73a7557932761147f2447beb9a2e52f60ad6b0388], retrying...."
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.642607448Z" level=info msg="There are old running containers, the network config will not take affect"
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.652108453Z" level=info msg="Loading containers: done."
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.710522720Z" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 version=19.03.13
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.710638225Z" level=info msg="Daemon has completed initialization"
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.731510485Z" level=info msg="API listen on /var/run/docker.sock"
	* Nov 09 21:45:35 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:35.731548963Z" level=info msg="API listen on [::]:2376"
	* Nov 09 21:45:35 pause-20201109133858-342799 systemd[1]: Started Docker Application Container Engine.
	* Nov 09 21:45:59 pause-20201109133858-342799 dockerd[4946]: time="2020-11-09T21:45:59.020880011Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	* 2f46263465c23       bad58561c4be7       8 seconds ago       Running             storage-provisioner       0                   9715bcf4a684f
	* 35cf3ceecba8a       bfe3a36ebd252       13 seconds ago      Running             coredns                   1                   98b220fe16c56
	* 14eedce1a9425       d373dd5a8593a       14 seconds ago      Running             kube-proxy                1                   e26a8844cb8f2
	* 8ca92a7a9b697       0369cf4303ffd       24 seconds ago      Running             etcd                      1                   40e8e5bae4004
	* b6cfa8b1f67f7       8603821e1a7a5       24 seconds ago      Running             kube-controller-manager   2                   6e2258cfd1d3e
	* 2a6038f870331       2f32d66b884f8       24 seconds ago      Running             kube-scheduler            1                   b1423a15878d5
	* 9712a6d185a5f       607331163122e       24 seconds ago      Running             kube-apiserver            1                   6d5a1f59a5fcf
	* 1834a7dec7a3b       8603821e1a7a5       48 seconds ago      Created             kube-controller-manager   1                   50e54b941795a
	* b3b67c2eed1c1       bfe3a36ebd252       4 minutes ago       Exited              coredns                   0                   0403123a067b1
	* a8205b634d161       d373dd5a8593a       4 minutes ago       Exited              kube-proxy                0                   9af6016feab9b
	* 227da54d6a6f9       2f32d66b884f8       5 minutes ago       Exited              kube-scheduler            0                   4c4a2bbf22241
	* b608f3b390214       0369cf4303ffd       5 minutes ago       Exited              etcd                      0                   64ad079da229e
	* 0e411220a11dc       607331163122e       5 minutes ago       Exited              kube-apiserver            0                   4d15c72bacbcf
	* 
	* ==> coredns [35cf3ceecba8] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* 
	* ==> coredns [b3b67c2eed1c] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* [INFO] SIGTERM: Shutting down servers then terminating
	* [INFO] plugin/health: Going into lameduck mode for 5s
	* 
	* ==> describe nodes <==
	* Name:               pause-20201109133858-342799
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=pause-20201109133858-342799
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=21ac2a6a37964be4739a8be2fb5a50a8d224597d
	*                     minikube.k8s.io/name=pause-20201109133858-342799
	*                     minikube.k8s.io/updated_at=2020_11_09T13_41_15_0700
	*                     minikube.k8s.io/version=v1.14.2
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Mon, 09 Nov 2020 21:41:08 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  pause-20201109133858-342799
	*   AcquireTime:     <unset>
	*   RenewTime:       Mon, 09 Nov 2020 21:46:07 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Mon, 09 Nov 2020 21:45:57 +0000   Mon, 09 Nov 2020 21:41:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Mon, 09 Nov 2020 21:45:57 +0000   Mon, 09 Nov 2020 21:41:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Mon, 09 Nov 2020 21:45:57 +0000   Mon, 09 Nov 2020 21:41:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Mon, 09 Nov 2020 21:45:57 +0000   Mon, 09 Nov 2020 21:41:26 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.82.16
	*   Hostname:    pause-20201109133858-342799
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 4d8e2e9bd90d4aa38d4d73a4ff6ebd00
	*   System UUID:                fe7bca9f-3d05-4886-889c-3f3fc30c7bb5
	*   Boot ID:                    9ad1ab50-5be9-48e2-8ae1-dc31113bc120
	*   Kernel Version:             4.9.0-14-amd64
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://19.3.13
	*   Kubelet Version:            v1.19.2
	*   Kube-Proxy Version:         v1.19.2
	* Non-terminated Pods:          (7 in total)
	*   Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	*   kube-system                 coredns-f9fd979d6-mzdxp                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m43s
	*   kube-system                 etcd-pause-20201109133858-342799                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	*   kube-system                 kube-apiserver-pause-20201109133858-342799             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	*   kube-system                 kube-controller-manager-pause-20201109133858-342799    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	*   kube-system                 kube-proxy-zsvph                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	*   kube-system                 kube-scheduler-pause-20201109133858-342799             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	*   kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests   Limits
	*   --------           --------   ------
	*   cpu                650m (8%)  0 (0%)
	*   memory             70Mi (0%)  170Mi (0%)
	*   ephemeral-storage  0 (0%)     0 (0%)
	*   hugepages-1Gi      0 (0%)     0 (0%)
	*   hugepages-2Mi      0 (0%)     0 (0%)
	* Events:
	*   Type    Reason                   Age                    From        Message
	*   ----    ------                   ----                   ----        -------
	*   Normal  NodeHasNoDiskPressure    5m18s (x7 over 5m24s)  kubelet     Node pause-20201109133858-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeAllocatableEnforced  5m18s                  kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeHasSufficientMemory  5m13s (x8 over 5m24s)  kubelet     Node pause-20201109133858-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasSufficientPID     5m13s (x8 over 5m24s)  kubelet     Node pause-20201109133858-342799 status is now: NodeHasSufficientPID
	*   Normal  Starting                 4m56s                  kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  4m56s                  kubelet     Node pause-20201109133858-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    4m56s                  kubelet     Node pause-20201109133858-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     4m56s                  kubelet     Node pause-20201109133858-342799 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             4m56s                  kubelet     Node pause-20201109133858-342799 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  4m56s                  kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                4m46s                  kubelet     Node pause-20201109133858-342799 status is now: NodeReady
	*   Normal  Starting                 4m21s                  kube-proxy  Starting kube-proxy.
	*   Normal  Starting                 26s                    kubelet     Starting kubelet.
	*   Normal  NodeAllocatableEnforced  26s                    kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeHasSufficientMemory  25s (x8 over 26s)      kubelet     Node pause-20201109133858-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    25s (x8 over 26s)      kubelet     Node pause-20201109133858-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     25s (x7 over 26s)      kubelet     Node pause-20201109133858-342799 status is now: NodeHasSufficientPID
	*   Normal  Starting                 13s                    kube-proxy  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [Nov 9 21:28] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:29] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:31] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:32] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +18.886743] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:33] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:34] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +28.771909] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:35] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:37] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:38] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:39] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +0.249210] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +0.110709] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +27.722706] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:40] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:41] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +4.475323] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:42] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +28.080349] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:44] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +34.380797] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +1.362576] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:45] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +8.618364] cgroup: cgroup2: unknown option "nsdelegate"
	* 
	* ==> etcd [8ca92a7a9b69] <==
	* 2020-11-09 21:45:51.409814 I | etcdserver: restarting member 9364466d61f1ec2b in cluster ce8177bd8a545254 at commit index 573
	* raft2020/11/09 21:45:51 INFO: 9364466d61f1ec2b switched to configuration voters=()
	* raft2020/11/09 21:45:51 INFO: 9364466d61f1ec2b became follower at term 2
	* raft2020/11/09 21:45:51 INFO: newRaft 9364466d61f1ec2b [peers: [], term: 2, commit: 573, applied: 0, lastindex: 573, lastterm: 2]
	* 2020-11-09 21:45:51.413859 W | auth: simple token is not cryptographically signed
	* 2020-11-09 21:45:51.424409 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	* raft2020/11/09 21:45:51 INFO: 9364466d61f1ec2b switched to configuration voters=(10620691256855096363)
	* 2020-11-09 21:45:51.427358 I | etcdserver/membership: added member 9364466d61f1ec2b [https://192.168.82.16:2380] to cluster ce8177bd8a545254
	* 2020-11-09 21:45:51.427531 N | etcdserver/membership: set the initial cluster version to 3.4
	* 2020-11-09 21:45:51.427595 I | etcdserver/api: enabled capabilities for version 3.4
	* 2020-11-09 21:45:51.449134 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	* 2020-11-09 21:45:51.449279 I | embed: listening for peers on 192.168.82.16:2380
	* 2020-11-09 21:45:51.449461 I | embed: listening for metrics on http://127.0.0.1:2381
	* raft2020/11/09 21:45:52 INFO: 9364466d61f1ec2b is starting a new election at term 2
	* raft2020/11/09 21:45:52 INFO: 9364466d61f1ec2b became candidate at term 3
	* raft2020/11/09 21:45:52 INFO: 9364466d61f1ec2b received MsgVoteResp from 9364466d61f1ec2b at term 3
	* raft2020/11/09 21:45:52 INFO: 9364466d61f1ec2b became leader at term 3
	* raft2020/11/09 21:45:52 INFO: raft.node: 9364466d61f1ec2b elected leader 9364466d61f1ec2b at term 3
	* 2020-11-09 21:45:52.513665 I | etcdserver: published {Name:pause-20201109133858-342799 ClientURLs:[https://192.168.82.16:2379]} to cluster ce8177bd8a545254
	* 2020-11-09 21:45:52.513692 I | embed: ready to serve client requests
	* 2020-11-09 21:45:52.513714 I | embed: ready to serve client requests
	* 2020-11-09 21:45:52.517002 I | embed: serving client requests on 192.168.82.16:2379
	* 2020-11-09 21:45:52.517050 I | embed: serving client requests on 127.0.0.1:2379
	* 2020-11-09 21:45:59.602997 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:46:06.194212 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> etcd [b608f3b39021] <==
	* 2020-11-09 21:44:36.791441 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:44:36.796657 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (8.394669287s) to execute
	* 2020-11-09 21:44:36.985758 W | etcdserver: request "header:<ID:17017825011236489887 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:6c2b75aef4824a9e>" with result "size:41" took too long (157.058718ms) to execute
	* 2020-11-09 21:44:46.792554 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:44:53.851019 W | etcdserver: request "header:<ID:17017825011236489946 > lease_revoke:<id:6c2b75aef4824a9e>" with result "size:29" took too long (1.78219064s) to execute
	* 2020-11-09 21:44:54.076234 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000199657s) to execute
	* WARNING: 2020/11/09 21:44:54 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	* 2020-11-09 21:44:54.741676 W | wal: sync duration of 2.455124823s, expected less than 1s
	* 2020-11-09 21:44:55.070246 W | etcdserver: request "header:<ID:17017825011236489947 > lease_revoke:<id:6c2b75aef4824aaa>" with result "size:29" took too long (328.333044ms) to execute
	* 2020-11-09 21:44:55.070657 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (985.784146ms) to execute
	* 2020-11-09 21:44:56.791315 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:44:57.037560 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (338.032352ms) to execute
	* 2020-11-09 21:44:57.037685 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (962.000538ms) to execute
	* 2020-11-09 21:44:57.037767 W | etcdserver: read-only range request "key:\"/registry/storageclasses\" range_end:\"/registry/storageclasset\" count_only:true " with result "range_response_count:0 size:5" took too long (1.507935377s) to execute
	* 2020-11-09 21:45:04.642898 W | etcdserver: request "header:<ID:17017825011236489982 > lease_revoke:<id:6c2b75aef4824acd>" with result "size:29" took too long (784.256115ms) to execute
	* 2020-11-09 21:45:04.659771 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (583.115251ms) to execute
	* 2020-11-09 21:45:04.659939 W | etcdserver: read-only range request "key:\"/registry/replicasets\" range_end:\"/registry/replicasett\" count_only:true " with result "range_response_count:0 size:7" took too long (150.062924ms) to execute
	* 2020-11-09 21:45:06.791376 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:45:10.355317 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (1.953383369s) to execute
	* 2020-11-09 21:45:16.794906 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:45:20.401451 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (1.752742541s) to execute
	* 2020-11-09 21:45:21.057448 N | pkg/osutil: received terminated signal, shutting down...
	* WARNING: 2020/11/09 21:45:21 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: operation was canceled". Reconnecting...
	* WARNING: 2020/11/09 21:45:21 grpc: addrConn.createTransport failed to connect to {192.168.82.16:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.82.16:2379: operation was canceled". Reconnecting...
	* 2020-11-09 21:45:21.193205 I | etcdserver: skipped leadership transfer for single voting member cluster
	* 
	* ==> kernel <==
	*  21:46:13 up  1:28,  0 users,  load average: 12.92, 12.50, 8.54
	* Linux pause-20201109133858-342799 4.9.0-14-amd64 #1 SMP Debian 4.9.240-2 (2020-10-30) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [0e411220a11d] <==
	* W1109 21:45:30.723088       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.730151       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.749426       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.750610       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.754037       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.802327       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.816107       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.837536       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.841165       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.844723       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.846560       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.890200       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.908370       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.910825       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.943155       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.955759       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.961741       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.986308       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:30.996501       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* I1109 21:45:31.026267       1 client.go:360] parsed scheme: "passthrough"
	* I1109 21:45:31.026331       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I1109 21:45:31.026344       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* W1109 21:45:31.026664       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:31.064270       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:45:31.064280       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* 
	* ==> kube-apiserver [9712a6d185a5] <==
	* I1109 21:45:57.482350       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	* I1109 21:45:57.482374       1 crd_finalizer.go:266] Starting CRDFinalizer
	* I1109 21:45:57.482947       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	* I1109 21:45:57.482960       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	* I1109 21:45:57.482984       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	* I1109 21:45:57.482990       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	* I1109 21:45:57.485177       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	* I1109 21:45:57.485217       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	* I1109 21:45:57.586942       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	* E1109 21:45:57.591217       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	* I1109 21:45:57.681856       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	* I1109 21:45:57.691795       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	* I1109 21:45:57.780267       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	* I1109 21:45:57.786177       1 cache.go:39] Caches are synced for autoregister controller
	* I1109 21:45:57.788759       1 cache.go:39] Caches are synced for AvailableConditionController controller
	* I1109 21:45:58.478907       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	* I1109 21:45:58.478955       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	* I1109 21:45:58.487030       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	* I1109 21:46:00.096464       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	* I1109 21:46:00.131797       1 controller.go:606] quota admission added evaluator for: deployments.apps
	* I1109 21:46:00.224706       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	* I1109 21:46:00.273425       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	* I1109 21:46:00.286609       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	* I1109 21:46:03.524535       1 controller.go:606] quota admission added evaluator for: endpoints
	* I1109 21:46:03.979301       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	* 
	* ==> kube-controller-manager [1834a7dec7a3] <==
	* 
	* ==> kube-controller-manager [b6cfa8b1f67f] <==
	* I1109 21:46:04.011278       1 shared_informer.go:247] Caches are synced for service account 
	* I1109 21:46:04.011257       1 shared_informer.go:247] Caches are synced for TTL 
	* I1109 21:46:04.011408       1 shared_informer.go:247] Caches are synced for GC 
	* I1109 21:46:04.011502       1 shared_informer.go:247] Caches are synced for taint 
	* I1109 21:46:04.011562       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	* I1109 21:46:04.011549       1 taint_manager.go:187] Starting NoExecuteTaintManager
	* W1109 21:46:04.011628       1 node_lifecycle_controller.go:1044] Missing timestamp for Node pause-20201109133858-342799. Assuming now as a timestamp.
	* I1109 21:46:04.011671       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
	* I1109 21:46:04.011694       1 shared_informer.go:247] Caches are synced for endpoint 
	* I1109 21:46:04.011981       1 event.go:291] "Event occurred" object="pause-20201109133858-342799" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20201109133858-342799 event: Registered Node pause-20201109133858-342799 in Controller"
	* I1109 21:46:04.020117       1 shared_informer.go:247] Caches are synced for disruption 
	* I1109 21:46:04.020207       1 disruption.go:339] Sending events to api server.
	* I1109 21:46:04.061094       1 shared_informer.go:247] Caches are synced for deployment 
	* I1109 21:46:04.161267       1 shared_informer.go:247] Caches are synced for persistent volume 
	* I1109 21:46:04.177146       1 shared_informer.go:247] Caches are synced for PVC protection 
	* I1109 21:46:04.211723       1 shared_informer.go:247] Caches are synced for attach detach 
	* I1109 21:46:04.211795       1 shared_informer.go:247] Caches are synced for stateful set 
	* I1109 21:46:04.213443       1 shared_informer.go:247] Caches are synced for resource quota 
	* I1109 21:46:04.213500       1 shared_informer.go:247] Caches are synced for expand 
	* I1109 21:46:04.261419       1 shared_informer.go:247] Caches are synced for job 
	* I1109 21:46:04.262432       1 shared_informer.go:247] Caches are synced for resource quota 
	* I1109 21:46:04.275650       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I1109 21:46:04.575995       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:46:04.610214       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:46:04.610250       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* 
	* ==> kube-proxy [14eedce1a942] <==
	* I1109 21:45:59.180651       1 node.go:136] Successfully retrieved node IP: 192.168.82.16
	* I1109 21:45:59.180773       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.82.16), assume IPv4 operation
	* W1109 21:45:59.232580       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
	* I1109 21:45:59.232709       1 server_others.go:186] Using iptables Proxier.
	* W1109 21:45:59.232733       1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
	* I1109 21:45:59.232743       1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
	* I1109 21:45:59.233109       1 server.go:650] Version: v1.19.2
	* I1109 21:45:59.233843       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:45:59.234380       1 config.go:315] Starting service config controller
	* I1109 21:45:59.234402       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I1109 21:45:59.234629       1 config.go:224] Starting endpoint slice config controller
	* I1109 21:45:59.234649       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I1109 21:45:59.334707       1 shared_informer.go:247] Caches are synced for service config 
	* I1109 21:45:59.334742       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* 
	* ==> kube-proxy [a8205b634d16] <==
	* I1109 21:41:51.362277       1 node.go:136] Successfully retrieved node IP: 192.168.82.16
	* I1109 21:41:51.362386       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.82.16), assume IPv4 operation
	* W1109 21:41:51.464157       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
	* I1109 21:41:51.464305       1 server_others.go:186] Using iptables Proxier.
	* W1109 21:41:51.464321       1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
	* I1109 21:41:51.464329       1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
	* I1109 21:41:51.465109       1 server.go:650] Version: v1.19.2
	* I1109 21:41:51.465988       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:41:51.475843       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1109 21:41:51.475999       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1109 21:41:51.476574       1 config.go:224] Starting endpoint slice config controller
	* I1109 21:41:51.476598       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I1109 21:41:51.478410       1 config.go:315] Starting service config controller
	* I1109 21:41:51.481826       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I1109 21:41:51.582874       1 shared_informer.go:247] Caches are synced for service config 
	* I1109 21:41:51.582972       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* E1109 21:43:52.995308       1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": fork/exec /usr/sbin/iptables: permission denied: 
	* I1109 21:43:52.995397       1 proxier.go:850] Sync failed; retrying in 30s
	* W1109 21:43:54.106456       1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": fork/exec /usr/sbin/iptables: permission denied: 
	* E1109 21:44:23.065474       1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": fork/exec /usr/sbin/iptables: permission denied: 
	* I1109 21:44:23.065530       1 proxier.go:850] Sync failed; retrying in 30s
	* W1109 21:44:24.116587       1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": fork/exec /usr/sbin/iptables: permission denied: 
	* W1109 21:44:57.055608       1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": fork/exec /usr/sbin/iptables: permission denied: 
	* E1109 21:44:57.088881       1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": fork/exec /usr/sbin/iptables: permission denied: 
	* I1109 21:44:57.088937       1 proxier.go:850] Sync failed; retrying in 30s
	* 
	* ==> kube-scheduler [227da54d6a6f] <==
	* I1109 21:41:08.391113       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* E1109 21:41:08.394572       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:41:08.394776       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:41:08.394920       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1109 21:41:08.395892       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:41:08.395936       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1109 21:41:08.395967       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:41:08.397232       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1109 21:41:08.397623       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:41:08.404339       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:41:08.404510       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:41:08.404665       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:41:08.404771       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:41:08.406186       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:41:09.243746       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:41:09.244553       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:41:09.259022       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1109 21:41:09.287022       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:41:09.324355       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1109 21:41:09.371692       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:41:09.399355       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:41:09.504152       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:41:09.548306       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:41:09.698691       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* I1109 21:41:11.691495       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kube-scheduler [2a6038f87033] <==
	* I1109 21:45:50.474801       1 serving.go:331] Generated self-signed cert in-memory
	* W1109 21:45:57.503300       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	* W1109 21:45:57.503350       1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	* W1109 21:45:57.503382       1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
	* W1109 21:45:57.503402       1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	* I1109 21:45:57.564777       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:45:57.564816       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:45:57.576872       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:45:57.576958       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:45:57.578707       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	* I1109 21:45:57.578823       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* E1109 21:45:57.666074       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1109 21:45:57.666257       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:45:57.666392       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:45:57.666497       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1109 21:45:57.666601       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:45:57.666729       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:45:57.666847       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:45:57.666973       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:45:57.667066       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:45:57.667284       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:45:57.667426       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:45:57.667462       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:45:57.690003       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* I1109 21:45:59.177179       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2020-11-09 21:40:14 UTC, end at Mon 2020-11-09 21:46:14 UTC. --
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: E1109 21:45:57.218023    5587 kubelet.go:2183] node "pause-20201109133858-342799" not found
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: E1109 21:45:57.318171    5587 kubelet.go:2183] node "pause-20201109133858-342799" not found
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: E1109 21:45:57.419007    5587 kubelet.go:2183] node "pause-20201109133858-342799" not found
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: E1109 21:45:57.520233    5587 kubelet.go:2183] node "pause-20201109133858-342799" not found
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: E1109 21:45:57.657595    5587 kubelet.go:2183] node "pause-20201109133858-342799" not found
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: E1109 21:45:57.761637    5587 kubelet.go:2183] node "pause-20201109133858-342799" not found
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.858462    5587 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.860777    5587 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/262744fc-0400-43be-b5e5-16316d67e22c-kube-proxy") pod "kube-proxy-zsvph" (UID: "262744fc-0400-43be-b5e5-16316d67e22c")
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.866792    5587 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/262744fc-0400-43be-b5e5-16316d67e22c-xtables-lock") pod "kube-proxy-zsvph" (UID: "262744fc-0400-43be-b5e5-16316d67e22c")
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.866876    5587 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/262744fc-0400-43be-b5e5-16316d67e22c-lib-modules") pod "kube-proxy-zsvph" (UID: "262744fc-0400-43be-b5e5-16316d67e22c")
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.866943    5587 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-dnsvs" (UniqueName: "kubernetes.io/secret/262744fc-0400-43be-b5e5-16316d67e22c-kube-proxy-token-dnsvs") pod "kube-proxy-zsvph" (UID: "262744fc-0400-43be-b5e5-16316d67e22c")
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.867839    5587 kubelet_node_status.go:108] Node pause-20201109133858-342799 was previously registered
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.884979    5587 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.885761    5587 kubelet_node_status.go:73] Successfully registered node pause-20201109133858-342799
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.968157    5587 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d5876a87-498a-443f-abe3-db2bf0fd7e31-config-volume") pod "coredns-f9fd979d6-mzdxp" (UID: "d5876a87-498a-443f-abe3-db2bf0fd7e31")
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.968215    5587 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-dbq7j" (UniqueName: "kubernetes.io/secret/d5876a87-498a-443f-abe3-db2bf0fd7e31-coredns-token-dbq7j") pod "coredns-f9fd979d6-mzdxp" (UID: "d5876a87-498a-443f-abe3-db2bf0fd7e31")
	* Nov 09 21:45:57 pause-20201109133858-342799 kubelet[5587]: I1109 21:45:57.968236    5587 reconciler.go:157] Reconciler: start to sync state
	* Nov 09 21:45:59 pause-20201109133858-342799 kubelet[5587]: W1109 21:45:59.011922    5587 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-mzdxp through plugin: invalid network status for
	* Nov 09 21:45:59 pause-20201109133858-342799 kubelet[5587]: W1109 21:45:59.092095    5587 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-mzdxp through plugin: invalid network status for
	* Nov 09 21:45:59 pause-20201109133858-342799 kubelet[5587]: W1109 21:45:59.110760    5587 pod_container_deletor.go:79] Container "98b220fe16c567081fd344529bbd54933380924af767e7700b93a282f7d993a0" not found in pod's containers
	* Nov 09 21:46:00 pause-20201109133858-342799 kubelet[5587]: W1109 21:46:00.187492    5587 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-mzdxp through plugin: invalid network status for
	* Nov 09 21:46:03 pause-20201109133858-342799 kubelet[5587]: I1109 21:46:03.590021    5587 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 09 21:46:03 pause-20201109133858-342799 kubelet[5587]: I1109 21:46:03.779453    5587 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/f5afbb6b-6d7b-4091-98ee-ae57f5aa5250-tmp") pod "storage-provisioner" (UID: "f5afbb6b-6d7b-4091-98ee-ae57f5aa5250")
	* Nov 09 21:46:03 pause-20201109133858-342799 kubelet[5587]: I1109 21:46:03.779520    5587 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-r2frn" (UniqueName: "kubernetes.io/secret/f5afbb6b-6d7b-4091-98ee-ae57f5aa5250-storage-provisioner-token-r2frn") pod "storage-provisioner" (UID: "f5afbb6b-6d7b-4091-98ee-ae57f5aa5250")
	* Nov 09 21:46:04 pause-20201109133858-342799 kubelet[5587]: E1109 21:46:04.282645    5587 kuberuntime_manager.go:940] PodSandboxStatus of sandbox "9715bcf4a684fbd133c5c1a6161d643b49afe678907895fa894878bf60d48584" for pod "storage-provisioner_kube-system(f5afbb6b-6d7b-4091-98ee-ae57f5aa5250)" error: rpc error: code = Unknown desc = Error: No such container: 9715bcf4a684fbd133c5c1a6161d643b49afe678907895fa894878bf60d48584
	* 
	* ==> storage-provisioner [2f46263465c2] <==
	* I1109 21:46:04.889375       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I1109 21:46:04.902664       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I1109 21:46:04.902856       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f0bb1955-ed85-4eb2-b6ef-65b97551da7b", APIVersion:"v1", ResourceVersion:"575", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20201109133858-342799_23a9bd5b-76c9-41bb-a187-8f9ef05d25d3 became leader
	* I1109 21:46:04.902991       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_pause-20201109133858-342799_23a9bd5b-76c9-41bb-a187-8f9ef05d25d3!
	* I1109 21:46:05.003294       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_pause-20201109133858-342799_23a9bd5b-76c9-41bb-a187-8f9ef05d25d3!

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 13:46:13.165490  517189 out.go:286] unable to execute * 2020-11-09 21:44:36.985758 W | etcdserver: request "header:<ID:17017825011236489887 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:6c2b75aef4824a9e>" with result "size:41" took too long (157.058718ms) to execute
	: html/template:* 2020-11-09 21:44:36.985758 W | etcdserver: request "header:<ID:17017825011236489887 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:6c2b75aef4824a9e>" with result "size:41" took too long (157.058718ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.

                                                
                                                
** /stderr **
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20201109133858-342799 -n pause-20201109133858-342799
helpers_test.go:255: (dbg) Run:  kubectl --context pause-20201109133858-342799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: non-running pods: 
helpers_test.go:263: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:266: (dbg) Run:  kubectl --context pause-20201109133858-342799 describe pod 
helpers_test.go:266: (dbg) Non-zero exit: kubectl --context pause-20201109133858-342799 describe pod : exit status 1 (96.786892ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:268: kubectl --context pause-20201109133858-342799 describe pod : exit status 1
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (261.80s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (9.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:232: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20201109134632-342799 "sudo crictl images -o json"
start_stop_delete_test.go:232: Found non-minikube image: busybox:1.28.4-glibc
start_stop_delete_test.go:232: Found non-minikube image: minikube-local-cache-test:functional-20201109132758-342799
start_stop_delete_test.go:232: v1.19.2 images mismatch (-want +got):
[]string{
- 	"docker.io/kubernetesui/dashboard:v2.0.3",
- 	"docker.io/kubernetesui/metrics-scraper:v1.0.4",
	"gcr.io/k8s-minikube/storage-provisioner:v3",
	"k8s.gcr.io/coredns:1.7.0",
	... // 4 identical elements
	"k8s.gcr.io/kube-scheduler:v1.19.2",
	"k8s.gcr.io/pause:3.2",
+ 	"kubernetesui/dashboard:v2.0.3",
+ 	"kubernetesui/metrics-scraper:v1.0.4",
}
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:225: (dbg) Run:  docker inspect embed-certs-20201109134632-342799
helpers_test.go:229: (dbg) docker inspect embed-certs-20201109134632-342799:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2ab3e54942ab7d5269f808f678846e8f87a37832d551a65c89473e5610740c0c",
	        "Created": "2020-11-09T21:46:35.169197379Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 537246,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-11-09T21:48:10.761195454Z",
	            "FinishedAt": "2020-11-09T21:48:08.645832477Z"
	        },
	        "Image": "sha256:e0876e0a2db41a04c8143cbd27a7a5f9e10b610cb093093def031c59e5b44b0c",
	        "ResolvConfPath": "/var/lib/docker/containers/2ab3e54942ab7d5269f808f678846e8f87a37832d551a65c89473e5610740c0c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ab3e54942ab7d5269f808f678846e8f87a37832d551a65c89473e5610740c0c/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ab3e54942ab7d5269f808f678846e8f87a37832d551a65c89473e5610740c0c/hosts",
	        "LogPath": "/var/lib/docker/containers/2ab3e54942ab7d5269f808f678846e8f87a37832d551a65c89473e5610740c0c/2ab3e54942ab7d5269f808f678846e8f87a37832d551a65c89473e5610740c0c-json.log",
	        "Name": "/embed-certs-20201109134632-342799",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-20201109134632-342799:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20201109134632-342799",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c485e1cf0e1c0772684f642708fce9e29b43d8788cec5b73985963327c635851-init/diff:/var/lib/docker/overlay2/95e86d765bff52afd170fbce58d14aba80acb592622f18ddf93f06d4a9f68b7b/diff:/var/lib/docker/overlay2/f3b9d7ca87d0bc2c33a2b73d4acfe36b9573ed84758410f9fb2646a520f241fe/diff:/var/lib/docker/overlay2/207fc0f6ee67dd96ccb25c63e93e7189f5f397b509f8c71ead5583a80c5f94b5/diff:/var/lib/docker/overlay2/8caa5ec9f05b14f99c7708700c1734fe9d519d629a5228c0c3cb1231f1cef09e/diff:/var/lib/docker/overlay2/f2f5d9470c647e56b32cee1438db6b6249291738854b6e7a0a4a405034649aa6/diff:/var/lib/docker/overlay2/99b3672a581ed70eb5d25da5aed44455272a7adbd48148c91dc38abfbaa6e93d/diff:/var/lib/docker/overlay2/4bc3f7ac122327f368cdfb9bf82d578fce25cf7f63160aa08d869610da8889f4/diff:/var/lib/docker/overlay2/0f945a2ada7520d0f95289ddf08a8e7aedfd5dbfad8087529175cd81d61b52bc/diff:/var/lib/docker/overlay2/a14122d44a5bdb05416a0e7fbe1684738e86454c2eeb7671ecc856e6fdaa0273/diff:/var/lib/docker/overlay2/e669c0
1451df27b1aeed4835d4d9d40fae282947f2b260d9bc1d86355f464b80/diff:/var/lib/docker/overlay2/f0227d1f2e3671850f58a9e89d347d8c2a26c40d79b078f02122819452c582d8/diff:/var/lib/docker/overlay2/35b3431b00b5c7bd66b1d24487a8a45645708ce7a87848986068c0df1515b812/diff:/var/lib/docker/overlay2/c1af3457c560ed19e519e64ff919ac7035d8d9801f4f44adf7b160339827ebd0/diff:/var/lib/docker/overlay2/3861f37a52f49a93d483d69cc77384a5792b260325af8224c6225039f7480889/diff:/var/lib/docker/overlay2/05239e7556fc89f82e761ff1ddd2d1ed0ee1b426a039ad3c9a3ba0519d737f51/diff:/var/lib/docker/overlay2/cb1f04e7554841e6f2102080f7854a3c7e464e45648c1419ef2f8b4d8a03f0bb/diff:/var/lib/docker/overlay2/80908997b1122a686bee670e8dd93659dece6b51614d8576f02c11b96178534d/diff:/var/lib/docker/overlay2/6c4d0be909f5f0a95818a6009b44c355b57e39b0709de1cce2ba45597354d590/diff:/var/lib/docker/overlay2/3a855d5f2b162c2d938a5745ba8131c9a2b244967b4a03f6cddd177236f3a934/diff:/var/lib/docker/overlay2/0ab0a2a3e614830a41a389aa6ff55503f9c10b1227acfb927550f9726eb9a605/diff:/var/lib/d
ocker/overlay2/3331dc047d4c1c4d3b6bf48aba65be1ad34cff6d25329ef56e91ad8e4ee65fde/diff:/var/lib/docker/overlay2/d286c2ac5bb00869e09924bb7fc7389073b7267af1cc4c3ae6a339b29e4c0cc1/diff:/var/lib/docker/overlay2/d0a518fbd2bc93756d11040cfe4d7757f96e813dd9ddf918a54a82f7e61cb791/diff:/var/lib/docker/overlay2/2d63359a4d0da5faa54d017f487391caaee972fe4fb98db4b82e7aee2d9b01ee/diff:/var/lib/docker/overlay2/40ca142bba947004e2782bce25bd04810589fffbf8086ba91e3a074517cfa13c/diff:/var/lib/docker/overlay2/5b738c1270b81825ef44e9bd466dd8533bc4e3e3796355d7dc5dc104fb3a18c8/diff:/var/lib/docker/overlay2/c57d8a6f6b648bd2cf9d7140762b8590e1a71b9040c8a0f2b8ab31f65011bcd2/diff:/var/lib/docker/overlay2/37f83aba64f92a6cf695468f5b0a1016634c95e09933dcecec297cca22b9038b/diff:/var/lib/docker/overlay2/396d3843ed4e04893bd4bf961eab2522ce90962e626e057f5be38cbe0ed65d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c485e1cf0e1c0772684f642708fce9e29b43d8788cec5b73985963327c635851/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c485e1cf0e1c0772684f642708fce9e29b43d8788cec5b73985963327c635851/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c485e1cf0e1c0772684f642708fce9e29b43d8788cec5b73985963327c635851/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20201109134632-342799",
	                "Source": "/var/lib/docker/volumes/embed-certs-20201109134632-342799/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20201109134632-342799",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20201109134632-342799",
	                "name.minikube.sigs.k8s.io": "embed-certs-20201109134632-342799",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "feba36f89d2e264ca4e96e48cb6c266b5e4bfb878f049b9ad1485995fe4e8f2b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/feba36f89d2e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20201109134632-342799": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.82.16"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2ab3e54942ab"
	                    ],
	                    "NetworkID": "7d4a4069a7a39b32a908237208688e65956395417594330a38f4e30f4264b15e",
	                    "EndpointID": "355aad7c29d5f9da5b6acb40ce0b830d8b3d2ca1aa8fde75ea0f8b5a6b881e8b",
	                    "Gateway": "192.168.82.1",
	                    "IPAddress": "192.168.82.16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:52:10",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
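(Editor's note, not part of the captured test output: the inspect dump above shows the kic container publishing ports 22, 2376, 5000 and 8443 on 127.0.0.1 host ports. A minimal Go sketch of reading those HostPort bindings from docker inspect JSON could look like the following; the container name is copied from the output above and the struct mirrors only the fields used here.)

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// inspectEntry mirrors only the NetworkSettings.Ports portion of the
// `docker inspect` output that this sketch reads.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	// Container name as it appears in the inspect output above.
	out, err := exec.Command("docker", "inspect", "embed-certs-20201109134632-342799").Output()
	if err != nil {
		log.Fatal(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		log.Fatal(err)
	}
	if len(entries) == 0 {
		log.Fatal("no inspect entries returned")
	}
	// Prints lines such as "22/tcp -> 127.0.0.1:33083", matching the
	// port bindings listed in the captured output.
	for port, bindings := range entries[0].NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
		}
	}
}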
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20201109134632-342799 -n embed-certs-20201109134632-342799
helpers_test.go:238: <<< TestStartStop/group/embed-certs/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20201109134632-342799 logs -n 25
helpers_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-20201109134632-342799 logs -n 25: (3.417461863s)
helpers_test.go:246: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Mon 2020-11-09 21:48:11 UTC, end at Mon 2020-11-09 21:49:15 UTC. --
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 systemd[1]: Stopped Docker Application Container Engine.
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 systemd[1]: Starting Docker Application Container Engine...
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.926996416Z" level=info msg="Starting up"
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.929528871Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.929566395Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.929607422Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.929621697Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.931480953Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.931527972Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.931556550Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.931577379Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.955262757Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.966070426Z" level=warning msg="Your kernel does not support swap memory limit"
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.966107766Z" level=warning msg="Your kernel does not support cgroup rt period"
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.966116559Z" level=warning msg="Your kernel does not support cgroup rt runtime"
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.966311872Z" level=info msg="Loading containers: start."
	* Nov 09 21:48:22 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:22.155133270Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Nov 09 21:48:22 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:22.217051712Z" level=info msg="Loading containers: done."
	* Nov 09 21:48:22 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:22.254707158Z" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 version=19.03.13
	* Nov 09 21:48:22 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:22.254829293Z" level=info msg="Daemon has completed initialization"
	* Nov 09 21:48:22 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:22.272067536Z" level=info msg="API listen on /var/run/docker.sock"
	* Nov 09 21:48:22 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:22.272087534Z" level=info msg="API listen on [::]:2376"
	* Nov 09 21:48:22 embed-certs-20201109134632-342799 systemd[1]: Started Docker Application Container Engine.
	* Nov 09 21:48:38 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:38.981010582Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* Nov 09 21:49:09 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:49:09.261187279Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                             CREATED              STATE               NAME                        ATTEMPT             POD ID
	* cfbccfa0e96b2       86262685d9abb                                                                     14 seconds ago       Running             dashboard-metrics-scraper   0                   7d511e9327215
	* e8b8095c5dcec       503bc4b7440b9                                                                     14 seconds ago       Running             kubernetes-dashboard        0                   79ad0b0c78dcb
	* 824d17be8ab7f       bad58561c4be7                                                                     37 seconds ago       Exited              storage-provisioner         1                   aaa2d57b38eca
	* e73b51e98fd2f       d373dd5a8593a                                                                     37 seconds ago       Running             kube-proxy                  1                   f6d7e33736539
	* 260a2b944f295       56cc512116c8f                                                                     37 seconds ago       Running             busybox                     1                   657250caff984
	* f5a56caffa066       bfe3a36ebd252                                                                     37 seconds ago       Running             coredns                     1                   59afee3de02ae
	* 7a246252cc2b4       0369cf4303ffd                                                                     48 seconds ago       Running             etcd                        1                   37590f6aabdba
	* c7a8df7cab257       2f32d66b884f8                                                                     48 seconds ago       Running             kube-scheduler              1                   a639ed0109409
	* 9cf56787377be       607331163122e                                                                     48 seconds ago       Running             kube-apiserver              1                   a64452fc3b7ea
	* 1ac183bbf19f7       8603821e1a7a5                                                                     48 seconds ago       Running             kube-controller-manager     1                   488d251e9411e
	* 56f169b029f63       busybox@sha256:bda689514be526d9557ad442312e5d541757c453c50b8cf2ae68597c291385a1   About a minute ago   Exited              busybox                     0                   838c43ab87ded
	* f493dd916a998       bfe3a36ebd252                                                                     About a minute ago   Exited              coredns                     0                   6a88b05c92729
	* 0497c03cee18b       d373dd5a8593a                                                                     About a minute ago   Exited              kube-proxy                  0                   8b0754311bc40
	* acf3b4e626dec       0369cf4303ffd                                                                     2 minutes ago        Exited              etcd                        0                   7a52c10f15778
	* 1e430985e8952       2f32d66b884f8                                                                     2 minutes ago        Exited              kube-scheduler              0                   981384ffa04eb
	* 571aac763dbf2       8603821e1a7a5                                                                     2 minutes ago        Exited              kube-controller-manager     0                   1a652bb6cc724
	* 5f47fadca78c3       607331163122e                                                                     2 minutes ago        Exited              kube-apiserver              0                   0241eed42b3e4
	* 
	* ==> coredns [f493dd916a99] <==
	* E1109 21:47:58.182967       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=458&timeout=7m7s&timeoutSeconds=427&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	* E1109 21:47:58.182980       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=156&timeout=5m19s&timeoutSeconds=319&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	* E1109 21:47:58.182980       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=220&timeout=8m26s&timeoutSeconds=506&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* [INFO] SIGTERM: Shutting down servers then terminating
	* [INFO] plugin/health: Going into lameduck mode for 5s
	* 
	* ==> coredns [f5a56caffa06] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20201109134632-342799
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=embed-certs-20201109134632-342799
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=21ac2a6a37964be4739a8be2fb5a50a8d224597d
	*                     minikube.k8s.io/name=embed-certs-20201109134632-342799
	*                     minikube.k8s.io/updated_at=2020_11_09T13_47_21_0700
	*                     minikube.k8s.io/version=v1.14.2
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Mon, 09 Nov 2020 21:47:18 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  embed-certs-20201109134632-342799
	*   AcquireTime:     <unset>
	*   RenewTime:       Mon, 09 Nov 2020 21:49:09 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Mon, 09 Nov 2020 21:48:35 +0000   Mon, 09 Nov 2020 21:47:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Mon, 09 Nov 2020 21:48:35 +0000   Mon, 09 Nov 2020 21:47:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Mon, 09 Nov 2020 21:48:35 +0000   Mon, 09 Nov 2020 21:47:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Mon, 09 Nov 2020 21:48:35 +0000   Mon, 09 Nov 2020 21:47:32 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.82.16
	*   Hostname:    embed-certs-20201109134632-342799
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 ff05727156aa489e8256f49dc42ef0bf
	*   System UUID:                23a0b8f9-e92b-4fd4-ae6c-7935ad2706cf
	*   Boot ID:                    9ad1ab50-5be9-48e2-8ae1-dc31113bc120
	*   Kernel Version:             4.9.0-14-amd64
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://19.3.13
	*   Kubelet Version:            v1.19.2
	*   Kube-Proxy Version:         v1.19.2
	* Non-terminated Pods:          (10 in total)
	*   Namespace                   Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                         ------------  ----------  ---------------  -------------  ---
	*   default                     busybox                                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	*   kube-system                 coredns-f9fd979d6-7gxsc                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     100s
	*   kube-system                 etcd-embed-certs-20201109134632-342799                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	*   kube-system                 kube-apiserver-embed-certs-20201109134632-342799             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	*   kube-system                 kube-controller-manager-embed-certs-20201109134632-342799    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	*   kube-system                 kube-proxy-8j529                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	*   kube-system                 kube-scheduler-embed-certs-20201109134632-342799             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	*   kube-system                 storage-provisioner                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	*   kubernetes-dashboard        dashboard-metrics-scraper-c95fcf479-zw8m7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	*   kubernetes-dashboard        kubernetes-dashboard-584f46694c-nqgzr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests   Limits
	*   --------           --------   ------
	*   cpu                650m (8%)  0 (0%)
	*   memory             70Mi (0%)  170Mi (0%)
	*   ephemeral-storage  0 (0%)     0 (0%)
	*   hugepages-1Gi      0 (0%)     0 (0%)
	*   hugepages-2Mi      0 (0%)     0 (0%)
	* Events:
	*   Type    Reason                   Age                  From        Message
	*   ----    ------                   ----                 ----        -------
	*   Normal  Starting                 2m6s                 kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  2m6s (x4 over 2m6s)  kubelet     Node embed-certs-20201109134632-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    2m6s (x3 over 2m6s)  kubelet     Node embed-certs-20201109134632-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     2m6s (x3 over 2m6s)  kubelet     Node embed-certs-20201109134632-342799 status is now: NodeHasSufficientPID
	*   Normal  NodeAllocatableEnforced  2m6s                 kubelet     Updated Node Allocatable limit across pods
	*   Normal  Starting                 114s                 kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  114s                 kubelet     Node embed-certs-20201109134632-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    114s                 kubelet     Node embed-certs-20201109134632-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     114s                 kubelet     Node embed-certs-20201109134632-342799 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             114s                 kubelet     Node embed-certs-20201109134632-342799 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  113s                 kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                104s                 kubelet     Node embed-certs-20201109134632-342799 status is now: NodeReady
	*   Normal  Starting                 95s                  kube-proxy  Starting kube-proxy.
	*   Normal  Starting                 50s                  kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  50s (x8 over 50s)    kubelet     Node embed-certs-20201109134632-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    50s (x8 over 50s)    kubelet     Node embed-certs-20201109134632-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     50s (x7 over 50s)    kubelet     Node embed-certs-20201109134632-342799 status is now: NodeHasSufficientPID
	*   Normal  NodeAllocatableEnforced  50s                  kubelet     Updated Node Allocatable limit across pods
	*   Normal  Starting                 37s                  kube-proxy  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [Nov 9 21:38] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:39] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +0.249210] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +0.110709] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +27.722706] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:40] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:41] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +4.475323] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:42] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +28.080349] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:44] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +34.380797] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +1.362576] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:45] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +8.618364] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:46] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +10.217485] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:48] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +7.704784] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	* [  +0.000035] ll header: 00000000: ff ff ff ff ff ff 0e ae 30 cf cc 1c 08 06        ........0.....
	* [  +0.000006] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	* [  +0.000001] ll header: 00000000: ff ff ff ff ff ff 0e ae 30 cf cc 1c 08 06        ........0.....
	* [  +6.373099] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +9.034581] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth9dbc4a3c
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 42 95 d6 6a 72 32 08 06        ......B..jr2..
	* 
	* ==> etcd [7a246252cc2b] <==
	* 2020-11-09 21:48:37.847929 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.464744505s) to execute
	* 2020-11-09 21:48:37.848250 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.384573932s) to execute
	* 2020-11-09 21:48:37.848286 W | etcdserver: read-only range request "key:\"/registry/clusterroles/view\" " with result "range_response_count:1 size:2042" took too long (1.467336756s) to execute
	* 2020-11-09 21:48:38.363380 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:48:43.686009 W | wal: sync duration of 2.124687288s, expected less than 1s
	* 2020-11-09 21:48:44.577054 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503)
	* 2020-11-09 21:48:46.022072 W | etcdserver: request "header:<ID:17017825011350935536 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/specs/kubernetes-dashboard/dashboard-metrics-scraper\" mod_revision:0 > success:<request_put:<key:\"/registry/services/specs/kubernetes-dashboard/dashboard-metrics-scraper\" value_size:1282 >> failure:<>>" with result "size:16" took too long (2.335655319s) to execute
	* 2020-11-09 21:48:46.022590 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/deployment-controller\" " with result "range_response_count:1 size:260" took too long (4.193364223s) to execute
	* 2020-11-09 21:48:49.529545 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context canceled" took too long (2.000121745s) to execute
	* WARNING: 2020/11/09 21:48:49 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	* 2020-11-09 21:48:49.649874 W | wal: sync duration of 5.963663801s, expected less than 1s
	* 2020-11-09 21:48:51.411280 W | wal: sync duration of 1.76121557s, expected less than 1s
	* 2020-11-09 21:48:52.695899 W | etcdserver: failed to revoke 6c2b75aefa2c1701 ("etcdserver: request timed out")
	* 2020-11-09 21:48:54.577159 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503)
	* 2020-11-09 21:48:56.692649 W | etcdserver: request "header:<ID:17017825011350935540 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-20201109134632-342799\" mod_revision:460 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-20201109134632-342799\" value_size:611 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-20201109134632-342799\" > >>" with result "size:16" took too long (7.0424289s) to execute
	* 2020-11-09 21:48:56.693254 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-f9fd979d6-7gxsc\" " with result "range_response_count:1 size:4826" took too long (10.664308788s) to execute
	* 2020-11-09 21:48:59.360981 W | wal: sync duration of 5.568261579s, expected less than 1s
	* 2020-11-09 21:48:59.367852 W | etcdserver: failed to apply request "header:<ID:17017825011350935545 > lease_revoke:<id:6c2b75aefa2c1701>" with response "size:29" took (6.576412ms) to execute, err is lease not found
	* 2020-11-09 21:48:59.368464 W | etcdserver: failed to revoke 6c2b75aefa2c1701 ("lease not found")
	* 2020-11-09 21:48:59.370203 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (10.883156307s) to execute
	* 2020-11-09 21:48:59.370485 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/expand-controller\" " with result "range_response_count:1 size:248" took too long (13.334910036s) to execute
	* 2020-11-09 21:48:59.382099 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.852994849s) to execute
	* 2020-11-09 21:48:59.382564 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/embed-certs-20201109134632-342799\" " with result "range_response_count:1 size:700" took too long (2.682173737s) to execute
	* 2020-11-09 21:49:03.576953 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:49:13.577519 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> etcd [acf3b4e626de] <==
	* 2020-11-09 21:47:24.983913 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (1.392575696s) to execute
	* 2020-11-09 21:47:28.561999 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:47:30.447060 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503)
	* 2020-11-09 21:47:32.779226 W | wal: sync duration of 4.183418577s, expected less than 1s
	* 2020-11-09 21:47:36.019194 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context canceled" took too long (1.999976953s) to execute
	* WARNING: 2020/11/09 21:47:36 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	* 2020-11-09 21:47:36.375916 W | wal: sync duration of 3.588582222s, expected less than 1s
	* 2020-11-09 21:47:36.376003 W | etcdserver: read-only range request "key:\"/registry/clusterroles/admin\" " with result "range_response_count:1 size:3325" took too long (7.717089569s) to execute
	* 2020-11-09 21:47:36.376245 W | etcdserver: request "header:<ID:17017825011331503654 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-node-lease/default\" mod_revision:324 > success:<request_put:<key:\"/registry/serviceaccounts/kube-node-lease/default\" value_size:153 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-node-lease/default\" > >>" with result "size:16" took too long (3.596533742s) to execute
	* 2020-11-09 21:47:36.382476 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:1 size:209" took too long (7.425218785s) to execute
	* 2020-11-09 21:47:36.383144 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-20201109134632-342799\" " with result "range_response_count:1 size:6056" took too long (3.963759962s) to execute
	* 2020-11-09 21:47:36.383666 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (6.28425769s) to execute
	* 2020-11-09 21:47:36.384019 W | etcdserver: read-only range request "key:\"/registry/minions/embed-certs-20201109134632-342799\" " with result "range_response_count:1 size:5369" took too long (7.260060034s) to execute
	* 2020-11-09 21:47:39.406440 W | wal: sync duration of 3.001662735s, expected less than 1s
	* 2020-11-09 21:47:39.408836 W | etcdserver: read-only range request "key:\"/registry/secrets/kube-node-lease/default-token-v7dj8\" " with result "range_response_count:1 size:2713" took too long (3.743879999s) to execute
	* 2020-11-09 21:47:39.414118 W | etcdserver: read-only range request "key:\"/registry/controllerrevisions/kube-system/kube-proxy-744c595cb\" " with result "range_response_count:1 size:2286" took too long (3.029434988s) to execute
	* 2020-11-09 21:47:39.417400 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (2.615649154s) to execute
	* 2020-11-09 21:47:39.417911 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:610" took too long (3.022808797s) to execute
	* 2020-11-09 21:47:39.418244 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:3479" took too long (3.024365648s) to execute
	* 2020-11-09 21:47:39.455131 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:47:49.447660 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:47:54.784313 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:2170" took too long (205.767195ms) to execute
	* 2020-11-09 21:47:58.091336 N | pkg/osutil: received terminated signal, shutting down...
	* WARNING: 2020/11/09 21:47:58 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* 2020-11-09 21:47:58.167416 I | etcdserver: skipped leadership transfer for single voting member cluster
	* 
	* ==> kernel <==
	*  21:49:16 up  1:31,  0 users,  load average: 6.49, 9.87, 8.24
	* Linux embed-certs-20201109134632-342799 4.9.0-14-amd64 #1 SMP Debian 4.9.240-2 (2020-10-30) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [5f47fadca78c] <==
	* W1109 21:48:07.451036       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.452145       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.520644       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.539814       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.635005       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.656323       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.682788       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.720591       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.733568       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.747490       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.751040       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.772622       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.775952       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.789969       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.800307       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.801544       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.819095       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.851023       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.855278       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.904057       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.936149       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:08.002119       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:08.074615       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:08.163079       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:08.174808       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* 
	* ==> kube-apiserver [9cf56787377b] <==
	* I1109 21:48:59.371399       1 trace.go:205] Trace[1733382738]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/expand-controller,user-agent:kube-controller-manager/v1.19.2 (linux/amd64) kubernetes/f574309/kube-controller-manager,client:192.168.82.16 (09-Nov-2020 21:48:46.034) (total time: 13336ms):
	* Trace[1733382738]: ---"About to write a response" 13336ms (21:48:00.371)
	* Trace[1733382738]: [13.336453324s] [13.336453324s] END
	* I1109 21:48:59.371758       1 trace.go:205] Trace[804655931]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.19.2 (linux/amd64) kubernetes/f574309,client:192.168.82.16 (09-Nov-2020 21:48:53.032) (total time: 6338ms):
	* Trace[804655931]: ---"Object stored in database" 6338ms (21:48:00.371)
	* Trace[804655931]: [6.338782316s] [6.338782316s] END
	* I1109 21:48:59.371400       1 trace.go:205] Trace[726476451]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.19.2 (linux/amd64) kubernetes/f574309,client:127.0.0.1 (09-Nov-2020 21:48:48.486) (total time: 10884ms):
	* Trace[726476451]: ---"About to write a response" 10884ms (21:48:00.371)
	* Trace[726476451]: [10.884852756s] [10.884852756s] END
	* I1109 21:48:59.374273       1 trace.go:205] Trace[1191387262]: "GuaranteedUpdate etcd3" type:*core.Pod (09-Nov-2020 21:48:56.697) (total time: 2676ms):
	* Trace[1191387262]: ---"Transaction committed" 2673ms (21:48:00.374)
	* Trace[1191387262]: [2.676457365s] [2.676457365s] END
	* I1109 21:48:59.374643       1 trace.go:205] Trace[574079011]: "Patch" url:/api/v1/namespaces/kube-system/pods/coredns-f9fd979d6-7gxsc/status,user-agent:kubelet/v1.19.2 (linux/amd64) kubernetes/f574309,client:192.168.82.16 (09-Nov-2020 21:48:56.697) (total time: 2677ms):
	* Trace[574079011]: ---"Object stored in database" 2674ms (21:48:00.374)
	* Trace[574079011]: [2.677033864s] [2.677033864s] END
	* I1109 21:48:59.386693       1 trace.go:205] Trace[2076754026]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/embed-certs-20201109134632-342799,user-agent:kubelet/v1.19.2 (linux/amd64) kubernetes/f574309,client:192.168.82.16 (09-Nov-2020 21:48:56.699) (total time: 2686ms):
	* Trace[2076754026]: ---"About to write a response" 2686ms (21:48:00.386)
	* Trace[2076754026]: [2.686926545s] [2.686926545s] END
	* I1109 21:49:00.126617       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	* I1109 21:49:00.164245       1 controller.go:606] quota admission added evaluator for: endpoints
	* I1109 21:49:00.337741       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	* I1109 21:49:00.475577       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	* I1109 21:49:08.605706       1 client.go:360] parsed scheme: "passthrough"
	* I1109 21:49:08.605772       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I1109 21:49:08.605796       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* 
	* ==> kube-controller-manager [1ac183bbf19f] <==
	* I1109 21:49:00.166290       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	* I1109 21:49:00.174492       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	* I1109 21:49:00.175312       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	* I1109 21:49:00.181042       1 shared_informer.go:247] Caches are synced for GC 
	* I1109 21:49:00.274976       1 shared_informer.go:247] Caches are synced for attach detach 
	* I1109 21:49:00.326637       1 shared_informer.go:247] Caches are synced for taint 
	* I1109 21:49:00.326787       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	* I1109 21:49:00.326813       1 taint_manager.go:187] Starting NoExecuteTaintManager
	* W1109 21:49:00.326872       1 node_lifecycle_controller.go:1044] Missing timestamp for Node embed-certs-20201109134632-342799. Assuming now as a timestamp.
	* I1109 21:49:00.326912       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
	* I1109 21:49:00.326972       1 event.go:291] "Event occurred" object="embed-certs-20201109134632-342799" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node embed-certs-20201109134632-342799 event: Registered Node embed-certs-20201109134632-342799 in Controller"
	* I1109 21:49:00.333990       1 shared_informer.go:247] Caches are synced for deployment 
	* I1109 21:49:00.344895       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-584f46694c to 1"
	* I1109 21:49:00.345583       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-c95fcf479 to 1"
	* I1109 21:49:00.372496       1 shared_informer.go:247] Caches are synced for disruption 
	* I1109 21:49:00.372534       1 disruption.go:339] Sending events to api server.
	* I1109 21:49:00.384108       1 shared_informer.go:247] Caches are synced for resource quota 
	* I1109 21:49:00.418432       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	* I1109 21:49:00.427998       1 shared_informer.go:247] Caches are synced for resource quota 
	* I1109 21:49:00.451625       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-c95fcf479-zw8m7"
	* I1109 21:49:00.452399       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-584f46694c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-584f46694c-nqgzr"
	* I1109 21:49:00.461970       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I1109 21:49:00.762257       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:49:00.768472       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:49:00.768504       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* 
	* ==> kube-controller-manager [571aac763dbf] <==
	* I1109 21:47:28.790518       1 shared_informer.go:247] Caches are synced for expand 
	* I1109 21:47:28.790743       1 shared_informer.go:247] Caches are synced for persistent volume 
	* I1109 21:47:28.795957       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I1109 21:47:29.096238       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:47:29.122176       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:47:29.122215       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I1109 21:47:29.440324       1 request.go:645] Throttling request took 1.048651161s, request: GET:https://192.168.82.16:8443/apis/events.k8s.io/v1?timeout=32s
	* I1109 21:47:30.142759       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	* I1109 21:47:30.142815       1 shared_informer.go:247] Caches are synced for resource quota 
	* E1109 21:47:35.663234       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"embed-certs-20201109134632-342799.1645f534a389ce8a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"embed-certs-20201109134632-342799", UID:"7d77e4ed-7aec-495f-a84f-f2914a791bd5", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"RegisteredNode", Message:"Node embed-certs-20201109134632-342799 event: Registered Node embed-certs-20201109134632-342799 in Controller", Source:v
1.EventSource{Component:"node-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfe28c1c273c6e8a, ext:16962091427, loc:(*time.Location)(0x6a59c80)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfe28c1c273c6e8a, ext:16962091427, loc:(*time.Location)(0x6a59c80)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: request timed out' (will not retry!)
	* E1109 21:47:35.668384       1 daemon_controller.go:320] kube-system/kube-proxy failed with : failed to construct revisions of DaemonSet: etcdserver: request timed out
	* E1109 21:47:35.670529       1 controller_utils.go:231] unable to update labels map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux] for Node "embed-certs-20201109134632-342799": failed to patch the node: etcdserver: request timed out
	* E1109 21:47:35.670869       1 node_lifecycle_controller.go:607] Failed to reconcile labels for node <embed-certs-20201109134632-342799>, requeue it: failed update labels for node &Node{ObjectMeta:{embed-certs-20201109134632-342799   /api/v1/nodes/embed-certs-20201109134632-342799 7d77e4ed-7aec-495f-a84f-f2914a791bd5 249 0 2020-11-09 21:47:18 +0000 UTC <nil> <nil> map[kubernetes.io/arch:amd64 kubernetes.io/hostname:embed-certs-20201109134632-342799 kubernetes.io/os:linux minikube.k8s.io/commit:21ac2a6a37964be4739a8be2fb5a50a8d224597d minikube.k8s.io/name:embed-certs-20201109134632-342799 minikube.k8s.io/updated_at:2020_11_09T13_47_21_0700 minikube.k8s.io/version:v1.14.2 node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2020-11-09 21:47:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kub
ernetes.io/master":{}}}}} {kubectl-label Update v1 2020-11-09 21:47:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:minikube.k8s.io/commit":{},"f:minikube.k8s.io/name":{},"f:minikube.k8s.io/updated_at":{},"f:minikube.k8s.io/version":{}}}}} {kubelet Update v1 2020-11-09 21:47:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTran
sitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus
{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{528310767616 0} {<nil>} 515928484Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{31628288000 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{528310767616 0} {<nil>} 515928484Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{31628288000 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,
Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:22 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.82.16,},NodeAddress{Type:Hostname,Address:embed-certs-20201109134632-342799,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d4c0807a7121464481bb577d74022f0b,SystemUUID:23a0b8f9-e92b-4fd4-ae6c-7935ad2706cf,BootID:9ad1ab50-5be9-48e2-8ae1-dc31113bc120,KernelVersion:4.9.0-14-amd64,OSImage:Ubuntu 20.04.1 LTS,ContainerRuntimeVersion:docker://19.3.13,KubeletVersion:v1.19.2,KubeProxyVersion:v1.19.2,Oper
atingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard@sha256:45ef224759bc50c84445f233fffae4aa3bdaec705cb5ee4bfe36d183b270b45d kubernetesui/dashboard:v2.0.3],SizeBytes:224634157,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:fc905eab708c6abbdf0ef0d47667592b948fea3adf31d71b19b5205340d00011 k8s.gcr.io/kube-apiserver:v1.19.2],SizeBytes:118778218,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:fa7c9d19680704e246873eb600c02fa95167d5c58e56d56ba9ed30b7c4150ac1 k8s.gcr.io/kube-proxy:v1.19.2],SizeBytes:117686573,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:c94b98d9f79bdfe33010c313891d99ed50858d6f04ceef865e7904c338dad913 k8s.gcr.io/kube-controller-manager:v1.19.2],SizeBytes:110778730,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb058c7394fad4d968d366b8b372698a1144a
1c3c6de52cdf46ff050ccfd31ff k8s.gcr.io/kube-scheduler:v1.19.2],SizeBytes:45656426,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf kubernetesui/metrics-scraper:v1.0.4],SizeBytes:36937728,},ContainerImage{Names:[gcr.io/k8s-minikube/storage-provisioner@sha256:5d8c9e69200846ff740bca872d681d2a736014386e4006fd26c4bf24ef7813ec gcr.io/k8s-minikube/storage-provisioner:v3],SizeBytes:29667328,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[minikube-local-cache-test:functional-20201109132758-342799],SizeBytes:30,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
	* E1109 21:47:35.683303       1 ttl_controller.go:220] etcdserver: request timed out
	* W1109 21:47:35.694645       1 endpointslice_controller.go:284] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: failed to create EndpointSlice for Service kube-system/kube-dns: etcdserver: request timed out
	* I1109 21:47:35.694823       1 event.go:291] "Event occurred" object="kube-system/kube-dns" kind="Service" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpointSlices" message="Error updating Endpoint Slices for Service kube-system/kube-dns: failed to create EndpointSlice for Service kube-system/kube-dns: etcdserver: request timed out"
	* E1109 21:47:36.381120       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	* I1109 21:47:36.386534       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Warning" reason="ReplicaSetCreateError" message="Failed to create new replica set \"coredns-f9fd979d6\": etcdserver: request timed out"
	* I1109 21:47:39.421022       1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-7gxsc"
	* I1109 21:47:39.466610       1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-hz9zv"
	* E1109 21:47:39.467168       1 tokens_controller.go:261] error synchronizing serviceaccount kube-node-lease/default: etcdserver: request timed out
	* I1109 21:47:39.498774       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8j529"
	* I1109 21:47:39.884325       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-f9fd979d6 to 1"
	* I1109 21:47:40.079782       1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-f9fd979d6-hz9zv"
	* I1109 21:47:43.659662       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	* 
	* ==> kube-proxy [0497c03cee18] <==
	* I1109 21:47:41.260238       1 node.go:136] Successfully retrieved node IP: 192.168.82.16
	* I1109 21:47:41.260371       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.82.16), assume IPv4 operation
	* W1109 21:47:41.324964       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
	* I1109 21:47:41.325105       1 server_others.go:186] Using iptables Proxier.
	* W1109 21:47:41.325167       1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
	* I1109 21:47:41.325208       1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
	* I1109 21:47:41.325650       1 server.go:650] Version: v1.19.2
	* I1109 21:47:41.326341       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:47:41.326579       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1109 21:47:41.326675       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1109 21:47:41.327004       1 config.go:315] Starting service config controller
	* I1109 21:47:41.327032       1 config.go:224] Starting endpoint slice config controller
	* I1109 21:47:41.327075       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I1109 21:47:41.327067       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I1109 21:47:41.427318       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* I1109 21:47:41.427320       1 shared_informer.go:247] Caches are synced for service config 
	* 
	* ==> kube-proxy [e73b51e98fd2] <==
	* I1109 21:48:39.059346       1 node.go:136] Successfully retrieved node IP: 192.168.82.16
	* I1109 21:48:39.059834       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.82.16), assume IPv4 operation
	* W1109 21:48:39.182518       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
	* I1109 21:48:39.182640       1 server_others.go:186] Using iptables Proxier.
	* W1109 21:48:39.182655       1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
	* I1109 21:48:39.182661       1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
	* I1109 21:48:39.182975       1 server.go:650] Version: v1.19.2
	* I1109 21:48:39.183647       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:48:39.183813       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1109 21:48:39.183944       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1109 21:48:39.184145       1 config.go:315] Starting service config controller
	* I1109 21:48:39.184168       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I1109 21:48:39.184217       1 config.go:224] Starting endpoint slice config controller
	* I1109 21:48:39.184230       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I1109 21:48:39.284489       1 shared_informer.go:247] Caches are synced for service config 
	* I1109 21:48:39.284556       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* 
	* ==> kube-scheduler [1e430985e895] <==
	* I1109 21:47:18.371790       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:47:18.371812       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:47:18.371898       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* E1109 21:47:18.380092       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1109 21:47:18.380505       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:47:18.380960       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1109 21:47:18.381517       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:47:18.382034       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:47:18.382265       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:47:18.382384       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:47:18.382461       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:47:18.382540       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:47:18.382708       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:47:18.382796       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1109 21:47:18.382937       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:47:18.382967       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:47:19.265518       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:47:19.363478       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1109 21:47:19.377101       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:47:19.377447       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:47:19.424091       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:47:19.507148       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:47:19.666023       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:47:19.683829       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* I1109 21:47:22.372226       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kube-scheduler [c7a8df7cab25] <==
	* I1109 21:48:28.407698       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:48:28.407782       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:48:30.093572       1 serving.go:331] Generated self-signed cert in-memory
	* W1109 21:48:35.363283       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	* W1109 21:48:35.363339       1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	* W1109 21:48:35.363352       1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
	* W1109 21:48:35.363360       1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	* I1109 21:48:35.475503       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:48:35.475538       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:48:35.479872       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:48:35.479933       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:48:35.480498       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	* I1109 21:48:35.481568       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* I1109 21:48:35.581291       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2020-11-09 21:48:11 UTC, end at Mon 2020-11-09 21:49:18 UTC. --
	* Nov 09 21:48:40 embed-certs-20201109134632-342799 kubelet[1190]: W1109 21:48:40.496092    1190 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-7gxsc through plugin: invalid network status for
	* Nov 09 21:48:48 embed-certs-20201109134632-342799 kubelet[1190]: E1109 21:48:48.086202    1190 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Nov 09 21:48:48 embed-certs-20201109134632-342799 kubelet[1190]: E1109 21:48:48.086239    1190 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Nov 09 21:48:52 embed-certs-20201109134632-342799 kubelet[1190]: E1109 21:48:52.489378    1190 controller.go:178] failed to update node lease, error: etcdserver: request timed out
	* Nov 09 21:48:53 embed-certs-20201109134632-342799 kubelet[1190]: E1109 21:48:53.031285    1190 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-embed-certs-20201109134632-342799.1645f54240183408", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-embed-certs-20201109134632-342799", UID:"9447b7084db297c9714ad3f013f933da", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"SandboxChanged", Message:"Pod sandbox cha
nged, it will be killed and re-created.", Source:v1.EventSource{Component:"kubelet", Host:"embed-certs-20201109134632-342799"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfe28c2ac71e4608, ext:1509486595, loc:(*time.Location)(0x6cf5c60)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfe28c2ac71e4608, ext:1509486595, loc:(*time.Location)(0x6cf5c60)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: request timed out' (will not retry!)
	* Nov 09 21:48:56 embed-certs-20201109134632-342799 kubelet[1190]: E1109 21:48:56.699075    1190 controller.go:178] failed to update node lease, error: Operation cannot be fulfilled on leases.coordination.k8s.io "embed-certs-20201109134632-342799": the object has been modified; please apply your changes to the latest version and try again
	* Nov 09 21:48:58 embed-certs-20201109134632-342799 kubelet[1190]: E1109 21:48:58.097199    1190 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Nov 09 21:48:58 embed-certs-20201109134632-342799 kubelet[1190]: E1109 21:48:58.097246    1190 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Nov 09 21:49:00 embed-certs-20201109134632-342799 kubelet[1190]: I1109 21:49:00.480143    1190 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 09 21:49:00 embed-certs-20201109134632-342799 kubelet[1190]: I1109 21:49:00.520244    1190 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 09 21:49:00 embed-certs-20201109134632-342799 kubelet[1190]: I1109 21:49:00.628898    1190 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/10004477-735a-4d58-95eb-eb8d770a6035-tmp-volume") pod "dashboard-metrics-scraper-c95fcf479-zw8m7" (UID: "10004477-735a-4d58-95eb-eb8d770a6035")
	* Nov 09 21:49:00 embed-certs-20201109134632-342799 kubelet[1190]: I1109 21:49:00.628952    1190 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/1f496dcb-22d1-4c9f-90bb-2b9130632b61-tmp-volume") pod "kubernetes-dashboard-584f46694c-nqgzr" (UID: "1f496dcb-22d1-4c9f-90bb-2b9130632b61")
	* Nov 09 21:49:00 embed-certs-20201109134632-342799 kubelet[1190]: I1109 21:49:00.628993    1190 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-5mfvs" (UniqueName: "kubernetes.io/secret/1f496dcb-22d1-4c9f-90bb-2b9130632b61-kubernetes-dashboard-token-5mfvs") pod "kubernetes-dashboard-584f46694c-nqgzr" (UID: "1f496dcb-22d1-4c9f-90bb-2b9130632b61")
	* Nov 09 21:49:00 embed-certs-20201109134632-342799 kubelet[1190]: I1109 21:49:00.629067    1190 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-5mfvs" (UniqueName: "kubernetes.io/secret/10004477-735a-4d58-95eb-eb8d770a6035-kubernetes-dashboard-token-5mfvs") pod "dashboard-metrics-scraper-c95fcf479-zw8m7" (UID: "10004477-735a-4d58-95eb-eb8d770a6035")
	* Nov 09 21:49:01 embed-certs-20201109134632-342799 kubelet[1190]: W1109 21:49:01.519083    1190 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479-zw8m7 through plugin: invalid network status for
	* Nov 09 21:49:01 embed-certs-20201109134632-342799 kubelet[1190]: W1109 21:49:01.597676    1190 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-584f46694c-nqgzr through plugin: invalid network status for
	* Nov 09 21:49:01 embed-certs-20201109134632-342799 kubelet[1190]: W1109 21:49:01.794233    1190 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479-zw8m7 through plugin: invalid network status for
	* Nov 09 21:49:01 embed-certs-20201109134632-342799 kubelet[1190]: W1109 21:49:01.844561    1190 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-584f46694c-nqgzr through plugin: invalid network status for
	* Nov 09 21:49:02 embed-certs-20201109134632-342799 kubelet[1190]: W1109 21:49:02.909450    1190 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479-zw8m7 through plugin: invalid network status for
	* Nov 09 21:49:02 embed-certs-20201109134632-342799 kubelet[1190]: W1109 21:49:02.919615    1190 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-584f46694c-nqgzr through plugin: invalid network status for
	* Nov 09 21:49:08 embed-certs-20201109134632-342799 kubelet[1190]: E1109 21:49:08.112573    1190 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Nov 09 21:49:08 embed-certs-20201109134632-342799 kubelet[1190]: E1109 21:49:08.112643    1190 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Nov 09 21:49:10 embed-certs-20201109134632-342799 kubelet[1190]: I1109 21:49:10.013114    1190 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 572d2f66874bf142a9732605494fd31cdca81bff7e3821991815dbb5643a24bb
	* Nov 09 21:49:10 embed-certs-20201109134632-342799 kubelet[1190]: I1109 21:49:10.013600    1190 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 824d17be8ab7f33eba6f1605b8a0fc8992d3f3f4ea59f7ac4283d68ea500e798
	* Nov 09 21:49:10 embed-certs-20201109134632-342799 kubelet[1190]: E1109 21:49:10.013964    1190 pod_workers.go:191] Error syncing pod f8f2dc83-2275-4184-b931-8767527fb343 ("storage-provisioner_kube-system(f8f2dc83-2275-4184-b931-8767527fb343)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f8f2dc83-2275-4184-b931-8767527fb343)"
	* 
	* ==> kubernetes-dashboard [e8b8095c5dce] <==
	* 2020/11/09 21:49:01 Starting overwatch
	* 2020/11/09 21:49:01 Using namespace: kubernetes-dashboard
	* 2020/11/09 21:49:01 Using in-cluster config to connect to apiserver
	* 2020/11/09 21:49:01 Using secret token for csrf signing
	* 2020/11/09 21:49:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	* 2020/11/09 21:49:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	* 2020/11/09 21:49:01 Successful initial request to the apiserver, version: v1.19.2
	* 2020/11/09 21:49:01 Generating JWE encryption key
	* 2020/11/09 21:49:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	* 2020/11/09 21:49:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	* 2020/11/09 21:49:02 Initializing JWE encryption key from synchronized object
	* 2020/11/09 21:49:02 Creating in-cluster Sidecar client
	* 2020/11/09 21:49:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	* 2020/11/09 21:49:02 Serving insecurely on HTTP port: 9090
	* 
	* ==> storage-provisioner [824d17be8ab7] <==
	* F1109 21:49:09.164907       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout

-- /stdout --
** stderr ** 
	E1109 13:49:16.676244  547616 out.go:286] unable to execute * 2020-11-09 21:48:46.022072 W | etcdserver: request "header:<ID:17017825011350935536 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/specs/kubernetes-dashboard/dashboard-metrics-scraper\" mod_revision:0 > success:<request_put:<key:\"/registry/services/specs/kubernetes-dashboard/dashboard-metrics-scraper\" value_size:1282 >> failure:<>>" with result "size:16" took too long (2.335655319s) to execute
	: html/template:* 2020-11-09 21:48:46.022072 W | etcdserver: request "header:<ID:17017825011350935536 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/specs/kubernetes-dashboard/dashboard-metrics-scraper\" mod_revision:0 > success:<request_put:<key:\"/registry/services/specs/kubernetes-dashboard/dashboard-metrics-scraper\" value_size:1282 >> failure:<>>" with result "size:16" took too long (2.335655319s) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E1109 13:49:16.705939  547616 out.go:286] unable to execute * 2020-11-09 21:48:56.692649 W | etcdserver: request "header:<ID:17017825011350935540 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-20201109134632-342799\" mod_revision:460 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-20201109134632-342799\" value_size:611 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-20201109134632-342799\" > >>" with result "size:16" took too long (7.0424289s) to execute
	: html/template:* 2020-11-09 21:48:56.692649 W | etcdserver: request "header:<ID:17017825011350935540 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-20201109134632-342799\" mod_revision:460 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-20201109134632-342799\" value_size:611 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-20201109134632-342799\" > >>" with result "size:16" took too long (7.0424289s) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E1109 13:49:16.842981  547616 out.go:286] unable to execute * 2020-11-09 21:47:36.376245 W | etcdserver: request "header:<ID:17017825011331503654 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-node-lease/default\" mod_revision:324 > success:<request_put:<key:\"/registry/serviceaccounts/kube-node-lease/default\" value_size:153 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-node-lease/default\" > >>" with result "size:16" took too long (3.596533742s) to execute
	: html/template:* 2020-11-09 21:47:36.376245 W | etcdserver: request "header:<ID:17017825011331503654 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-node-lease/default\" mod_revision:324 > success:<request_put:<key:\"/registry/serviceaccounts/kube-node-lease/default\" value_size:153 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-node-lease/default\" > >>" with result "size:16" took too long (3.596533742s) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E1109 13:49:17.500469  547616 out.go:281] unable to parse "* E1109 21:47:35.670869       1 node_lifecycle_controller.go:607] Failed to reconcile labels for node <embed-certs-20201109134632-342799>, requeue it: failed update labels for node &Node{ObjectMeta:{embed-certs-20201109134632-342799   /api/v1/nodes/embed-certs-20201109134632-342799 7d77e4ed-7aec-495f-a84f-f2914a791bd5 249 0 2020-11-09 21:47:18 +0000 UTC <nil> <nil> map[kubernetes.io/arch:amd64 kubernetes.io/hostname:embed-certs-20201109134632-342799 kubernetes.io/os:linux minikube.k8s.io/commit:21ac2a6a37964be4739a8be2fb5a50a8d224597d minikube.k8s.io/name:embed-certs-20201109134632-342799 minikube.k8s.io/updated_at:2020_11_09T13_47_21_0700 minikube.k8s.io/version:v1.14.2 node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2020-11-09 21:47:21 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\"f:kubeadm.
alpha.kubernetes.io/cri-socket\":{}},\"f:labels\":{\"f:node-role.kubernetes.io/master\":{}}}}} {kubectl-label Update v1 2020-11-09 21:47:22 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:labels\":{\"f:minikube.k8s.io/commit\":{},\"f:minikube.k8s.io/name\":{},\"f:minikube.k8s.io/updated_at\":{},\"f:minikube.k8s.io/version\":{}}}}} {kubelet Update v1 2020-11-09 21:47:22 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:volumes.kubernetes.io/controller-managed-attach-detach\":{}},\"f:labels\":{\".\":{},\"f:kubernetes.io/arch\":{},\"f:kubernetes.io/hostname\":{},\"f:kubernetes.io/os\":{}}},\"f:status\":{\"f:addresses\":{\".\":{},\"k:{\\\"type\\\":\\\"Hostname\\\"}\":{\".\":{},\"f:address\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"InternalIP\\\"}\":{\".\":{},\"f:address\":{},\"f:type\":{}}},\"f:allocatable\":{\".\":{},\"f:cpu\":{},\"f:ephemeral-storage\":{},\"f:hugepages-1Gi\":{},\"f:hugepages-2Mi\":{},\"f:memory\":{},\"f:pods\":{}},\"f:capacity\":{\".\":{},\"f:cpu\":{},\"f:ephemeral-storage\":{},\"f:hug
epages-1Gi\":{},\"f:hugepages-2Mi\":{},\"f:memory\":{},\"f:pods\":{}},\"f:conditions\":{\".\":{},\"k:{\\\"type\\\":\\\"DiskPressure\\\"}\":{\".\":{},\"f:lastHeartbeatTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"MemoryPressure\\\"}\":{\".\":{},\"f:lastHeartbeatTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"PIDPressure\\\"}\":{\".\":{},\"f:lastHeartbeatTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"Ready\\\"}\":{\".\":{},\"f:lastHeartbeatTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}}},\"f:daemonEndpoints\":{\"f:kubeletEndpoint\":{\"f:Port\":{}}},\"f:images\":{},\"f:nodeInfo\":{\"f:architecture\":{},\"f:bootID\":{},\"f:containerRuntimeVersion\":{},\"f:kernelVersion\":{},\"f:kubeProxyVersion\":{},\"f:kubeletVersion\":{},\"f:machin
eID\":{},\"f:operatingSystem\":{},\"f:osImage\":{},\"f:systemUUID\":{}}}}}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{528310767616 0} {<nil>} 515928484Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{31628288000 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{528310767616 0} {<nil>} 515928484Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{31628288000 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTran
sitionTime:2020-11-09 21:47:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:22 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.82.16,},NodeAddress{Type:Hostname,Address:embed-certs-20201109134632-342799,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:
NodeSystemInfo{MachineID:d4c0807a7121464481bb577d74022f0b,SystemUUID:23a0b8f9-e92b-4fd4-ae6c-7935ad2706cf,BootID:9ad1ab50-5be9-48e2-8ae1-dc31113bc120,KernelVersion:4.9.0-14-amd64,OSImage:Ubuntu 20.04.1 LTS,ContainerRuntimeVersion:docker://19.3.13,KubeletVersion:v1.19.2,KubeProxyVersion:v1.19.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard@sha256:45ef224759bc50c84445f233fffae4aa3bdaec705cb5ee4bfe36d183b270b45d kubernetesui/dashboard:v2.0.3],SizeBytes:224634157,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:fc905eab708c6abbdf0ef0d47667592b948fea3adf31d71b19b5205340d00011 k8s.gcr.io/kube-apiserver:v1.19.2],SizeBytes:118778218,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:fa7c9d19680704e246873eb600c02fa95167d5c58e56d56ba9ed30b7c4150ac1 k8s.gcr.io/kube-proxy:v1.19.2],SizeBytes:
117686573,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:c94b98d9f79bdfe33010c313891d99ed50858d6f04ceef865e7904c338dad913 k8s.gcr.io/kube-controller-manager:v1.19.2],SizeBytes:110778730,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb058c7394fad4d968d366b8b372698a1144a1c3c6de52cdf46ff050ccfd31ff k8s.gcr.io/kube-scheduler:v1.19.2],SizeBytes:45656426,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf kubernetesui/metrics-scraper:v1.0.4],SizeBytes:36937728,},ContainerImage{Names:[gcr.io/k8s-minikube/storage-provisioner@sha256:5d8c9e69200846ff740bca872d681d2a736014386e4006fd26c4bf24ef7813ec gcr.io/k8s-minikube/storage-provisioner:v3],SizeBytes:29667328,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f5
48c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[minikube-local-cache-test:functional-20201109132758-342799],SizeBytes:30,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}\n": template: * E1109 21:47:35.670869       1 node_lifecycle_controller.go:607] Failed to reconcile labels for node <embed-certs-20201109134632-342799>, requeue it: failed update labels for node &Node{ObjectMeta:{embed-certs-20201109134632-342799   /api/v1/nodes/embed-certs-20201109134632-342799 7d77e4ed-7aec-495f-a84f-f2914a791bd5 249 0 2020-11-09 21:47:18 +0000 UTC <nil> <nil> map[kubernetes.io/arch:amd64 kubernetes.io/hostname:embed-certs-20201109134632-342799 kubernetes.io/os:linux minikube.k8s.io/commit:21ac2a6a37964be4739a8be2fb5a50a8d224597d minikube.k8s.io/name:embed-certs-20201109134632-342799 minikube.k8s.io/updated_at:2020_11_09T13_47_21_0700 minikube.k8s.io/version:v1.14.2 node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock volumes.kuber
netes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2020-11-09 21:47:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {kubectl-label Update v1 2020-11-09 21:47:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:minikube.k8s.io/commit":{},"f:minikube.k8s.io/name":{},"f:minikube.k8s.io/updated_at":{},"f:minikube.k8s.io/version":{}}}}} {kubelet Update v1 2020-11-09 21:47:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:c
apacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},S
pec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{528310767616 0} {<nil>} 515928484Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{31628288000 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{528310767616 0} {<nil>} 515928484Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{31628288000 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Mes
sage:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:22 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.82.16,},NodeAddress{Type:Hostname,Address:embed-certs-20201109134632-342799,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d4c0807a7121464481bb577d74022f0b,SystemUUID:23a0b8f9-
e92b-4fd4-ae6c-7935ad2706cf,BootID:9ad1ab50-5be9-48e2-8ae1-dc31113bc120,KernelVersion:4.9.0-14-amd64,OSImage:Ubuntu 20.04.1 LTS,ContainerRuntimeVersion:docker://19.3.13,KubeletVersion:v1.19.2,KubeProxyVersion:v1.19.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard@sha256:45ef224759bc50c84445f233fffae4aa3bdaec705cb5ee4bfe36d183b270b45d kubernetesui/dashboard:v2.0.3],SizeBytes:224634157,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:fc905eab708c6abbdf0ef0d47667592b948fea3adf31d71b19b5205340d00011 k8s.gcr.io/kube-apiserver:v1.19.2],SizeBytes:118778218,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:fa7c9d19680704e246873eb600c02fa95167d5c58e56d56ba9ed30b7c4150ac1 k8s.gcr.io/kube-proxy:v1.19.2],SizeBytes:117686573,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:c9
4b98d9f79bdfe33010c313891d99ed50858d6f04ceef865e7904c338dad913 k8s.gcr.io/kube-controller-manager:v1.19.2],SizeBytes:110778730,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb058c7394fad4d968d366b8b372698a1144a1c3c6de52cdf46ff050ccfd31ff k8s.gcr.io/kube-scheduler:v1.19.2],SizeBytes:45656426,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf kubernetesui/metrics-scraper:v1.0.4],SizeBytes:36937728,},ContainerImage{Names:[gcr.io/k8s-minikube/storage-provisioner@sha256:5d8c9e69200846ff740bca872d681d2a736014386e4006fd26c4bf24ef7813ec gcr.io/k8s-minikube/storage-provisioner:v3],SizeBytes:29667328,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[minik
ube-local-cache-test:functional-20201109132758-342799],SizeBytes:30,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
	:1: unexpected "}" in operand - returning raw string.

** /stderr **
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20201109134632-342799 -n embed-certs-20201109134632-342799
helpers_test.go:255: (dbg) Run:  kubectl --context embed-certs-20201109134632-342799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: non-running pods: 
helpers_test.go:263: ======> post-mortem[TestStartStop/group/embed-certs/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:266: (dbg) Run:  kubectl --context embed-certs-20201109134632-342799 describe pod 
helpers_test.go:266: (dbg) Non-zero exit: kubectl --context embed-certs-20201109134632-342799 describe pod : exit status 1 (89.128665ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:268: kubectl --context embed-certs-20201109134632-342799 describe pod : exit status 1
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:225: (dbg) Run:  docker inspect embed-certs-20201109134632-342799
helpers_test.go:229: (dbg) docker inspect embed-certs-20201109134632-342799:

-- stdout --
	[
	    {
	        "Id": "2ab3e54942ab7d5269f808f678846e8f87a37832d551a65c89473e5610740c0c",
	        "Created": "2020-11-09T21:46:35.169197379Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 537246,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-11-09T21:48:10.761195454Z",
	            "FinishedAt": "2020-11-09T21:48:08.645832477Z"
	        },
	        "Image": "sha256:e0876e0a2db41a04c8143cbd27a7a5f9e10b610cb093093def031c59e5b44b0c",
	        "ResolvConfPath": "/var/lib/docker/containers/2ab3e54942ab7d5269f808f678846e8f87a37832d551a65c89473e5610740c0c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ab3e54942ab7d5269f808f678846e8f87a37832d551a65c89473e5610740c0c/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ab3e54942ab7d5269f808f678846e8f87a37832d551a65c89473e5610740c0c/hosts",
	        "LogPath": "/var/lib/docker/containers/2ab3e54942ab7d5269f808f678846e8f87a37832d551a65c89473e5610740c0c/2ab3e54942ab7d5269f808f678846e8f87a37832d551a65c89473e5610740c0c-json.log",
	        "Name": "/embed-certs-20201109134632-342799",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-20201109134632-342799:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20201109134632-342799",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c485e1cf0e1c0772684f642708fce9e29b43d8788cec5b73985963327c635851-init/diff:/var/lib/docker/overlay2/95e86d765bff52afd170fbce58d14aba80acb592622f18ddf93f06d4a9f68b7b/diff:/var/lib/docker/overlay2/f3b9d7ca87d0bc2c33a2b73d4acfe36b9573ed84758410f9fb2646a520f241fe/diff:/var/lib/docker/overlay2/207fc0f6ee67dd96ccb25c63e93e7189f5f397b509f8c71ead5583a80c5f94b5/diff:/var/lib/docker/overlay2/8caa5ec9f05b14f99c7708700c1734fe9d519d629a5228c0c3cb1231f1cef09e/diff:/var/lib/docker/overlay2/f2f5d9470c647e56b32cee1438db6b6249291738854b6e7a0a4a405034649aa6/diff:/var/lib/docker/overlay2/99b3672a581ed70eb5d25da5aed44455272a7adbd48148c91dc38abfbaa6e93d/diff:/var/lib/docker/overlay2/4bc3f7ac122327f368cdfb9bf82d578fce25cf7f63160aa08d869610da8889f4/diff:/var/lib/docker/overlay2/0f945a2ada7520d0f95289ddf08a8e7aedfd5dbfad8087529175cd81d61b52bc/diff:/var/lib/docker/overlay2/a14122d44a5bdb05416a0e7fbe1684738e86454c2eeb7671ecc856e6fdaa0273/diff:/var/lib/docker/overlay2/e669c0
1451df27b1aeed4835d4d9d40fae282947f2b260d9bc1d86355f464b80/diff:/var/lib/docker/overlay2/f0227d1f2e3671850f58a9e89d347d8c2a26c40d79b078f02122819452c582d8/diff:/var/lib/docker/overlay2/35b3431b00b5c7bd66b1d24487a8a45645708ce7a87848986068c0df1515b812/diff:/var/lib/docker/overlay2/c1af3457c560ed19e519e64ff919ac7035d8d9801f4f44adf7b160339827ebd0/diff:/var/lib/docker/overlay2/3861f37a52f49a93d483d69cc77384a5792b260325af8224c6225039f7480889/diff:/var/lib/docker/overlay2/05239e7556fc89f82e761ff1ddd2d1ed0ee1b426a039ad3c9a3ba0519d737f51/diff:/var/lib/docker/overlay2/cb1f04e7554841e6f2102080f7854a3c7e464e45648c1419ef2f8b4d8a03f0bb/diff:/var/lib/docker/overlay2/80908997b1122a686bee670e8dd93659dece6b51614d8576f02c11b96178534d/diff:/var/lib/docker/overlay2/6c4d0be909f5f0a95818a6009b44c355b57e39b0709de1cce2ba45597354d590/diff:/var/lib/docker/overlay2/3a855d5f2b162c2d938a5745ba8131c9a2b244967b4a03f6cddd177236f3a934/diff:/var/lib/docker/overlay2/0ab0a2a3e614830a41a389aa6ff55503f9c10b1227acfb927550f9726eb9a605/diff:/var/lib/d
ocker/overlay2/3331dc047d4c1c4d3b6bf48aba65be1ad34cff6d25329ef56e91ad8e4ee65fde/diff:/var/lib/docker/overlay2/d286c2ac5bb00869e09924bb7fc7389073b7267af1cc4c3ae6a339b29e4c0cc1/diff:/var/lib/docker/overlay2/d0a518fbd2bc93756d11040cfe4d7757f96e813dd9ddf918a54a82f7e61cb791/diff:/var/lib/docker/overlay2/2d63359a4d0da5faa54d017f487391caaee972fe4fb98db4b82e7aee2d9b01ee/diff:/var/lib/docker/overlay2/40ca142bba947004e2782bce25bd04810589fffbf8086ba91e3a074517cfa13c/diff:/var/lib/docker/overlay2/5b738c1270b81825ef44e9bd466dd8533bc4e3e3796355d7dc5dc104fb3a18c8/diff:/var/lib/docker/overlay2/c57d8a6f6b648bd2cf9d7140762b8590e1a71b9040c8a0f2b8ab31f65011bcd2/diff:/var/lib/docker/overlay2/37f83aba64f92a6cf695468f5b0a1016634c95e09933dcecec297cca22b9038b/diff:/var/lib/docker/overlay2/396d3843ed4e04893bd4bf961eab2522ce90962e626e057f5be38cbe0ed65d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c485e1cf0e1c0772684f642708fce9e29b43d8788cec5b73985963327c635851/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c485e1cf0e1c0772684f642708fce9e29b43d8788cec5b73985963327c635851/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c485e1cf0e1c0772684f642708fce9e29b43d8788cec5b73985963327c635851/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20201109134632-342799",
	                "Source": "/var/lib/docker/volumes/embed-certs-20201109134632-342799/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20201109134632-342799",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20201109134632-342799",
	                "name.minikube.sigs.k8s.io": "embed-certs-20201109134632-342799",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "feba36f89d2e264ca4e96e48cb6c266b5e4bfb878f049b9ad1485995fe4e8f2b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/feba36f89d2e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20201109134632-342799": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.82.16"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2ab3e54942ab"
	                    ],
	                    "NetworkID": "7d4a4069a7a39b32a908237208688e65956395417594330a38f4e30f4264b15e",
	                    "EndpointID": "355aad7c29d5f9da5b6acb40ce0b830d8b3d2ca1aa8fde75ea0f8b5a6b881e8b",
	                    "Gateway": "192.168.82.1",
	                    "IPAddress": "192.168.82.16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:52:10",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20201109134632-342799 -n embed-certs-20201109134632-342799

=== CONT  TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
helpers_test.go:238: <<< TestStartStop/group/embed-certs/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20201109134632-342799 logs -n 25

=== CONT  TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
helpers_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-20201109134632-342799 logs -n 25: (3.491001168s)
helpers_test.go:246: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Mon 2020-11-09 21:48:11 UTC, end at Mon 2020-11-09 21:49:20 UTC. --
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 systemd[1]: Stopped Docker Application Container Engine.
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 systemd[1]: Starting Docker Application Container Engine...
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.926996416Z" level=info msg="Starting up"
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.929528871Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.929566395Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.929607422Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.929621697Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.931480953Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.931527972Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.931556550Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.931577379Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.955262757Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.966070426Z" level=warning msg="Your kernel does not support swap memory limit"
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.966107766Z" level=warning msg="Your kernel does not support cgroup rt period"
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.966116559Z" level=warning msg="Your kernel does not support cgroup rt runtime"
	* Nov 09 21:48:21 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:21.966311872Z" level=info msg="Loading containers: start."
	* Nov 09 21:48:22 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:22.155133270Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Nov 09 21:48:22 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:22.217051712Z" level=info msg="Loading containers: done."
	* Nov 09 21:48:22 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:22.254707158Z" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 version=19.03.13
	* Nov 09 21:48:22 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:22.254829293Z" level=info msg="Daemon has completed initialization"
	* Nov 09 21:48:22 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:22.272067536Z" level=info msg="API listen on /var/run/docker.sock"
	* Nov 09 21:48:22 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:22.272087534Z" level=info msg="API listen on [::]:2376"
	* Nov 09 21:48:22 embed-certs-20201109134632-342799 systemd[1]: Started Docker Application Container Engine.
	* Nov 09 21:48:38 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:48:38.981010582Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* Nov 09 21:49:09 embed-certs-20201109134632-342799 dockerd[531]: time="2020-11-09T21:49:09.261187279Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                             CREATED              STATE               NAME                        ATTEMPT             POD ID
	* cfbccfa0e96b2       86262685d9abb                                                                     19 seconds ago       Running             dashboard-metrics-scraper   0                   7d511e9327215
	* e8b8095c5dcec       503bc4b7440b9                                                                     19 seconds ago       Running             kubernetes-dashboard        0                   79ad0b0c78dcb
	* 824d17be8ab7f       bad58561c4be7                                                                     42 seconds ago       Exited              storage-provisioner         1                   aaa2d57b38eca
	* e73b51e98fd2f       d373dd5a8593a                                                                     42 seconds ago       Running             kube-proxy                  1                   f6d7e33736539
	* 260a2b944f295       56cc512116c8f                                                                     42 seconds ago       Running             busybox                     1                   657250caff984
	* f5a56caffa066       bfe3a36ebd252                                                                     42 seconds ago       Running             coredns                     1                   59afee3de02ae
	* 7a246252cc2b4       0369cf4303ffd                                                                     53 seconds ago       Running             etcd                        1                   37590f6aabdba
	* c7a8df7cab257       2f32d66b884f8                                                                     53 seconds ago       Running             kube-scheduler              1                   a639ed0109409
	* 9cf56787377be       607331163122e                                                                     53 seconds ago       Running             kube-apiserver              1                   a64452fc3b7ea
	* 1ac183bbf19f7       8603821e1a7a5                                                                     53 seconds ago       Running             kube-controller-manager     1                   488d251e9411e
	* 56f169b029f63       busybox@sha256:bda689514be526d9557ad442312e5d541757c453c50b8cf2ae68597c291385a1   About a minute ago   Exited              busybox                     0                   838c43ab87ded
	* f493dd916a998       bfe3a36ebd252                                                                     About a minute ago   Exited              coredns                     0                   6a88b05c92729
	* 0497c03cee18b       d373dd5a8593a                                                                     About a minute ago   Exited              kube-proxy                  0                   8b0754311bc40
	* acf3b4e626dec       0369cf4303ffd                                                                     2 minutes ago        Exited              etcd                        0                   7a52c10f15778
	* 1e430985e8952       2f32d66b884f8                                                                     2 minutes ago        Exited              kube-scheduler              0                   981384ffa04eb
	* 571aac763dbf2       8603821e1a7a5                                                                     2 minutes ago        Exited              kube-controller-manager     0                   1a652bb6cc724
	* 5f47fadca78c3       607331163122e                                                                     2 minutes ago        Exited              kube-apiserver              0                   0241eed42b3e4
	* 
	* ==> coredns [f493dd916a99] <==
	* E1109 21:47:58.182967       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=458&timeout=7m7s&timeoutSeconds=427&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	* E1109 21:47:58.182980       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=156&timeout=5m19s&timeoutSeconds=319&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	* E1109 21:47:58.182980       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=220&timeout=8m26s&timeoutSeconds=506&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* [INFO] SIGTERM: Shutting down servers then terminating
	* [INFO] plugin/health: Going into lameduck mode for 5s
	* 
	* ==> coredns [f5a56caffa06] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20201109134632-342799
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=embed-certs-20201109134632-342799
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=21ac2a6a37964be4739a8be2fb5a50a8d224597d
	*                     minikube.k8s.io/name=embed-certs-20201109134632-342799
	*                     minikube.k8s.io/updated_at=2020_11_09T13_47_21_0700
	*                     minikube.k8s.io/version=v1.14.2
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Mon, 09 Nov 2020 21:47:18 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  embed-certs-20201109134632-342799
	*   AcquireTime:     <unset>
	*   RenewTime:       Mon, 09 Nov 2020 21:49:19 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Mon, 09 Nov 2020 21:48:35 +0000   Mon, 09 Nov 2020 21:47:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Mon, 09 Nov 2020 21:48:35 +0000   Mon, 09 Nov 2020 21:47:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Mon, 09 Nov 2020 21:48:35 +0000   Mon, 09 Nov 2020 21:47:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Mon, 09 Nov 2020 21:48:35 +0000   Mon, 09 Nov 2020 21:47:32 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.82.16
	*   Hostname:    embed-certs-20201109134632-342799
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 ff05727156aa489e8256f49dc42ef0bf
	*   System UUID:                23a0b8f9-e92b-4fd4-ae6c-7935ad2706cf
	*   Boot ID:                    9ad1ab50-5be9-48e2-8ae1-dc31113bc120
	*   Kernel Version:             4.9.0-14-amd64
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://19.3.13
	*   Kubelet Version:            v1.19.2
	*   Kube-Proxy Version:         v1.19.2
	* Non-terminated Pods:          (10 in total)
	*   Namespace                   Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                         ------------  ----------  ---------------  -------------  ---
	*   default                     busybox                                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	*   kube-system                 coredns-f9fd979d6-7gxsc                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	*   kube-system                 etcd-embed-certs-20201109134632-342799                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	*   kube-system                 kube-apiserver-embed-certs-20201109134632-342799             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	*   kube-system                 kube-controller-manager-embed-certs-20201109134632-342799    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	*   kube-system                 kube-proxy-8j529                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	*   kube-system                 kube-scheduler-embed-certs-20201109134632-342799             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	*   kube-system                 storage-provisioner                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	*   kubernetes-dashboard        dashboard-metrics-scraper-c95fcf479-zw8m7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	*   kubernetes-dashboard        kubernetes-dashboard-584f46694c-nqgzr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests   Limits
	*   --------           --------   ------
	*   cpu                650m (8%)  0 (0%)
	*   memory             70Mi (0%)  170Mi (0%)
	*   ephemeral-storage  0 (0%)     0 (0%)
	*   hugepages-1Gi      0 (0%)     0 (0%)
	*   hugepages-2Mi      0 (0%)     0 (0%)
	* Events:
	*   Type    Reason                   Age                    From        Message
	*   ----    ------                   ----                   ----        -------
	*   Normal  Starting                 2m10s                  kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  2m10s (x4 over 2m10s)  kubelet     Node embed-certs-20201109134632-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    2m10s (x3 over 2m10s)  kubelet     Node embed-certs-20201109134632-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     2m10s (x3 over 2m10s)  kubelet     Node embed-certs-20201109134632-342799 status is now: NodeHasSufficientPID
	*   Normal  NodeAllocatableEnforced  2m10s                  kubelet     Updated Node Allocatable limit across pods
	*   Normal  Starting                 118s                   kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  118s                   kubelet     Node embed-certs-20201109134632-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    118s                   kubelet     Node embed-certs-20201109134632-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     118s                   kubelet     Node embed-certs-20201109134632-342799 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             118s                   kubelet     Node embed-certs-20201109134632-342799 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  117s                   kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                108s                   kubelet     Node embed-certs-20201109134632-342799 status is now: NodeReady
	*   Normal  Starting                 99s                    kube-proxy  Starting kube-proxy.
	*   Normal  Starting                 54s                    kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  54s (x8 over 54s)      kubelet     Node embed-certs-20201109134632-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    54s (x8 over 54s)      kubelet     Node embed-certs-20201109134632-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     54s (x7 over 54s)      kubelet     Node embed-certs-20201109134632-342799 status is now: NodeHasSufficientPID
	*   Normal  NodeAllocatableEnforced  54s                    kubelet     Updated Node Allocatable limit across pods
	*   Normal  Starting                 41s                    kube-proxy  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [Nov 9 21:38] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:39] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +0.249210] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +0.110709] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +27.722706] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:40] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:41] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +4.475323] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:42] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +28.080349] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:44] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +34.380797] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +1.362576] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:45] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +8.618364] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:46] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +10.217485] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:48] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +7.704784] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	* [  +0.000035] ll header: 00000000: ff ff ff ff ff ff 0e ae 30 cf cc 1c 08 06        ........0.....
	* [  +0.000006] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	* [  +0.000001] ll header: 00000000: ff ff ff ff ff ff 0e ae 30 cf cc 1c 08 06        ........0.....
	* [  +6.373099] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +9.034581] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth9dbc4a3c
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 42 95 d6 6a 72 32 08 06        ......B..jr2..
	* 
	* ==> etcd [7a246252cc2b] <==
	* 2020-11-09 21:48:37.847929 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.464744505s) to execute
	* 2020-11-09 21:48:37.848250 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.384573932s) to execute
	* 2020-11-09 21:48:37.848286 W | etcdserver: read-only range request "key:\"/registry/clusterroles/view\" " with result "range_response_count:1 size:2042" took too long (1.467336756s) to execute
	* 2020-11-09 21:48:38.363380 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:48:43.686009 W | wal: sync duration of 2.124687288s, expected less than 1s
	* 2020-11-09 21:48:44.577054 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503)
	* 2020-11-09 21:48:46.022072 W | etcdserver: request "header:<ID:17017825011350935536 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/specs/kubernetes-dashboard/dashboard-metrics-scraper\" mod_revision:0 > success:<request_put:<key:\"/registry/services/specs/kubernetes-dashboard/dashboard-metrics-scraper\" value_size:1282 >> failure:<>>" with result "size:16" took too long (2.335655319s) to execute
	* 2020-11-09 21:48:46.022590 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/deployment-controller\" " with result "range_response_count:1 size:260" took too long (4.193364223s) to execute
	* 2020-11-09 21:48:49.529545 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context canceled" took too long (2.000121745s) to execute
	* WARNING: 2020/11/09 21:48:49 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	* 2020-11-09 21:48:49.649874 W | wal: sync duration of 5.963663801s, expected less than 1s
	* 2020-11-09 21:48:51.411280 W | wal: sync duration of 1.76121557s, expected less than 1s
	* 2020-11-09 21:48:52.695899 W | etcdserver: failed to revoke 6c2b75aefa2c1701 ("etcdserver: request timed out")
	* 2020-11-09 21:48:54.577159 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503)
	* 2020-11-09 21:48:56.692649 W | etcdserver: request "header:<ID:17017825011350935540 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-20201109134632-342799\" mod_revision:460 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-20201109134632-342799\" value_size:611 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-20201109134632-342799\" > >>" with result "size:16" took too long (7.0424289s) to execute
	* 2020-11-09 21:48:56.693254 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-f9fd979d6-7gxsc\" " with result "range_response_count:1 size:4826" took too long (10.664308788s) to execute
	* 2020-11-09 21:48:59.360981 W | wal: sync duration of 5.568261579s, expected less than 1s
	* 2020-11-09 21:48:59.367852 W | etcdserver: failed to apply request "header:<ID:17017825011350935545 > lease_revoke:<id:6c2b75aefa2c1701>" with response "size:29" took (6.576412ms) to execute, err is lease not found
	* 2020-11-09 21:48:59.368464 W | etcdserver: failed to revoke 6c2b75aefa2c1701 ("lease not found")
	* 2020-11-09 21:48:59.370203 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (10.883156307s) to execute
	* 2020-11-09 21:48:59.370485 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/expand-controller\" " with result "range_response_count:1 size:248" took too long (13.334910036s) to execute
	* 2020-11-09 21:48:59.382099 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.852994849s) to execute
	* 2020-11-09 21:48:59.382564 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/embed-certs-20201109134632-342799\" " with result "range_response_count:1 size:700" took too long (2.682173737s) to execute
	* 2020-11-09 21:49:03.576953 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:49:13.577519 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> etcd [acf3b4e626de] <==
	* 2020-11-09 21:47:24.983913 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (1.392575696s) to execute
	* 2020-11-09 21:47:28.561999 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:47:30.447060 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503)
	* 2020-11-09 21:47:32.779226 W | wal: sync duration of 4.183418577s, expected less than 1s
	* 2020-11-09 21:47:36.019194 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context canceled" took too long (1.999976953s) to execute
	* WARNING: 2020/11/09 21:47:36 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	* 2020-11-09 21:47:36.375916 W | wal: sync duration of 3.588582222s, expected less than 1s
	* 2020-11-09 21:47:36.376003 W | etcdserver: read-only range request "key:\"/registry/clusterroles/admin\" " with result "range_response_count:1 size:3325" took too long (7.717089569s) to execute
	* 2020-11-09 21:47:36.376245 W | etcdserver: request "header:<ID:17017825011331503654 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-node-lease/default\" mod_revision:324 > success:<request_put:<key:\"/registry/serviceaccounts/kube-node-lease/default\" value_size:153 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-node-lease/default\" > >>" with result "size:16" took too long (3.596533742s) to execute
	* 2020-11-09 21:47:36.382476 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:1 size:209" took too long (7.425218785s) to execute
	* 2020-11-09 21:47:36.383144 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-embed-certs-20201109134632-342799\" " with result "range_response_count:1 size:6056" took too long (3.963759962s) to execute
	* 2020-11-09 21:47:36.383666 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (6.28425769s) to execute
	* 2020-11-09 21:47:36.384019 W | etcdserver: read-only range request "key:\"/registry/minions/embed-certs-20201109134632-342799\" " with result "range_response_count:1 size:5369" took too long (7.260060034s) to execute
	* 2020-11-09 21:47:39.406440 W | wal: sync duration of 3.001662735s, expected less than 1s
	* 2020-11-09 21:47:39.408836 W | etcdserver: read-only range request "key:\"/registry/secrets/kube-node-lease/default-token-v7dj8\" " with result "range_response_count:1 size:2713" took too long (3.743879999s) to execute
	* 2020-11-09 21:47:39.414118 W | etcdserver: read-only range request "key:\"/registry/controllerrevisions/kube-system/kube-proxy-744c595cb\" " with result "range_response_count:1 size:2286" took too long (3.029434988s) to execute
	* 2020-11-09 21:47:39.417400 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (2.615649154s) to execute
	* 2020-11-09 21:47:39.417911 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:610" took too long (3.022808797s) to execute
	* 2020-11-09 21:47:39.418244 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:3479" took too long (3.024365648s) to execute
	* 2020-11-09 21:47:39.455131 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:47:49.447660 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:47:54.784313 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:2170" took too long (205.767195ms) to execute
	* 2020-11-09 21:47:58.091336 N | pkg/osutil: received terminated signal, shutting down...
	* WARNING: 2020/11/09 21:47:58 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* 2020-11-09 21:47:58.167416 I | etcdserver: skipped leadership transfer for single voting member cluster
	* 
	* ==> kernel <==
	*  21:49:21 up  1:31,  0 users,  load average: 6.05, 9.73, 8.20
	* Linux embed-certs-20201109134632-342799 4.9.0-14-amd64 #1 SMP Debian 4.9.240-2 (2020-10-30) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [5f47fadca78c] <==
	* W1109 21:48:07.451036       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.452145       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.520644       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.539814       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.635005       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.656323       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.682788       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.720591       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.733568       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.747490       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.751040       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.772622       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.775952       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.789969       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.800307       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.801544       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.819095       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.851023       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.855278       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.904057       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:07.936149       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:08.002119       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:08.074615       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:08.163079       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:48:08.174808       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* 
	* ==> kube-apiserver [9cf56787377b] <==
	* I1109 21:48:59.371399       1 trace.go:205] Trace[1733382738]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/expand-controller,user-agent:kube-controller-manager/v1.19.2 (linux/amd64) kubernetes/f574309/kube-controller-manager,client:192.168.82.16 (09-Nov-2020 21:48:46.034) (total time: 13336ms):
	* Trace[1733382738]: ---"About to write a response" 13336ms (21:48:00.371)
	* Trace[1733382738]: [13.336453324s] [13.336453324s] END
	* I1109 21:48:59.371758       1 trace.go:205] Trace[804655931]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.19.2 (linux/amd64) kubernetes/f574309,client:192.168.82.16 (09-Nov-2020 21:48:53.032) (total time: 6338ms):
	* Trace[804655931]: ---"Object stored in database" 6338ms (21:48:00.371)
	* Trace[804655931]: [6.338782316s] [6.338782316s] END
	* I1109 21:48:59.371400       1 trace.go:205] Trace[726476451]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.19.2 (linux/amd64) kubernetes/f574309,client:127.0.0.1 (09-Nov-2020 21:48:48.486) (total time: 10884ms):
	* Trace[726476451]: ---"About to write a response" 10884ms (21:48:00.371)
	* Trace[726476451]: [10.884852756s] [10.884852756s] END
	* I1109 21:48:59.374273       1 trace.go:205] Trace[1191387262]: "GuaranteedUpdate etcd3" type:*core.Pod (09-Nov-2020 21:48:56.697) (total time: 2676ms):
	* Trace[1191387262]: ---"Transaction committed" 2673ms (21:48:00.374)
	* Trace[1191387262]: [2.676457365s] [2.676457365s] END
	* I1109 21:48:59.374643       1 trace.go:205] Trace[574079011]: "Patch" url:/api/v1/namespaces/kube-system/pods/coredns-f9fd979d6-7gxsc/status,user-agent:kubelet/v1.19.2 (linux/amd64) kubernetes/f574309,client:192.168.82.16 (09-Nov-2020 21:48:56.697) (total time: 2677ms):
	* Trace[574079011]: ---"Object stored in database" 2674ms (21:48:00.374)
	* Trace[574079011]: [2.677033864s] [2.677033864s] END
	* I1109 21:48:59.386693       1 trace.go:205] Trace[2076754026]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/embed-certs-20201109134632-342799,user-agent:kubelet/v1.19.2 (linux/amd64) kubernetes/f574309,client:192.168.82.16 (09-Nov-2020 21:48:56.699) (total time: 2686ms):
	* Trace[2076754026]: ---"About to write a response" 2686ms (21:48:00.386)
	* Trace[2076754026]: [2.686926545s] [2.686926545s] END
	* I1109 21:49:00.126617       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	* I1109 21:49:00.164245       1 controller.go:606] quota admission added evaluator for: endpoints
	* I1109 21:49:00.337741       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	* I1109 21:49:00.475577       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	* I1109 21:49:08.605706       1 client.go:360] parsed scheme: "passthrough"
	* I1109 21:49:08.605772       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I1109 21:49:08.605796       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* 
	* ==> kube-controller-manager [1ac183bbf19f] <==
	* I1109 21:49:00.166290       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	* I1109 21:49:00.174492       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	* I1109 21:49:00.175312       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	* I1109 21:49:00.181042       1 shared_informer.go:247] Caches are synced for GC 
	* I1109 21:49:00.274976       1 shared_informer.go:247] Caches are synced for attach detach 
	* I1109 21:49:00.326637       1 shared_informer.go:247] Caches are synced for taint 
	* I1109 21:49:00.326787       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	* I1109 21:49:00.326813       1 taint_manager.go:187] Starting NoExecuteTaintManager
	* W1109 21:49:00.326872       1 node_lifecycle_controller.go:1044] Missing timestamp for Node embed-certs-20201109134632-342799. Assuming now as a timestamp.
	* I1109 21:49:00.326912       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
	* I1109 21:49:00.326972       1 event.go:291] "Event occurred" object="embed-certs-20201109134632-342799" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node embed-certs-20201109134632-342799 event: Registered Node embed-certs-20201109134632-342799 in Controller"
	* I1109 21:49:00.333990       1 shared_informer.go:247] Caches are synced for deployment 
	* I1109 21:49:00.344895       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-584f46694c to 1"
	* I1109 21:49:00.345583       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-c95fcf479 to 1"
	* I1109 21:49:00.372496       1 shared_informer.go:247] Caches are synced for disruption 
	* I1109 21:49:00.372534       1 disruption.go:339] Sending events to api server.
	* I1109 21:49:00.384108       1 shared_informer.go:247] Caches are synced for resource quota 
	* I1109 21:49:00.418432       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	* I1109 21:49:00.427998       1 shared_informer.go:247] Caches are synced for resource quota 
	* I1109 21:49:00.451625       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-c95fcf479-zw8m7"
	* I1109 21:49:00.452399       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-584f46694c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-584f46694c-nqgzr"
	* I1109 21:49:00.461970       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I1109 21:49:00.762257       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:49:00.768472       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:49:00.768504       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* 
	* ==> kube-controller-manager [571aac763dbf] <==
	* I1109 21:47:28.790518       1 shared_informer.go:247] Caches are synced for expand 
	* I1109 21:47:28.790743       1 shared_informer.go:247] Caches are synced for persistent volume 
	* I1109 21:47:28.795957       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I1109 21:47:29.096238       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:47:29.122176       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:47:29.122215       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I1109 21:47:29.440324       1 request.go:645] Throttling request took 1.048651161s, request: GET:https://192.168.82.16:8443/apis/events.k8s.io/v1?timeout=32s
	* I1109 21:47:30.142759       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	* I1109 21:47:30.142815       1 shared_informer.go:247] Caches are synced for resource quota 
	* E1109 21:47:35.663234       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"embed-certs-20201109134632-342799.1645f534a389ce8a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"embed-certs-20201109134632-342799", UID:"7d77e4ed-7aec-495f-a84f-f2914a791bd5", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"RegisteredNode", Message:"Node embed-certs-20201109134632-342799 event: Registered Node embed-certs-20201109134632-342799 in Controller", Source:v
1.EventSource{Component:"node-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfe28c1c273c6e8a, ext:16962091427, loc:(*time.Location)(0x6a59c80)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfe28c1c273c6e8a, ext:16962091427, loc:(*time.Location)(0x6a59c80)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: request timed out' (will not retry!)
	* E1109 21:47:35.668384       1 daemon_controller.go:320] kube-system/kube-proxy failed with : failed to construct revisions of DaemonSet: etcdserver: request timed out
	* E1109 21:47:35.670529       1 controller_utils.go:231] unable to update labels map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux] for Node "embed-certs-20201109134632-342799": failed to patch the node: etcdserver: request timed out
	* E1109 21:47:35.670869       1 node_lifecycle_controller.go:607] Failed to reconcile labels for node <embed-certs-20201109134632-342799>, requeue it: failed update labels for node &Node{ObjectMeta:{embed-certs-20201109134632-342799   /api/v1/nodes/embed-certs-20201109134632-342799 7d77e4ed-7aec-495f-a84f-f2914a791bd5 249 0 2020-11-09 21:47:18 +0000 UTC <nil> <nil> map[kubernetes.io/arch:amd64 kubernetes.io/hostname:embed-certs-20201109134632-342799 kubernetes.io/os:linux minikube.k8s.io/commit:21ac2a6a37964be4739a8be2fb5a50a8d224597d minikube.k8s.io/name:embed-certs-20201109134632-342799 minikube.k8s.io/updated_at:2020_11_09T13_47_21_0700 minikube.k8s.io/version:v1.14.2 node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2020-11-09 21:47:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kub
ernetes.io/master":{}}}}} {kubectl-label Update v1 2020-11-09 21:47:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:minikube.k8s.io/commit":{},"f:minikube.k8s.io/name":{},"f:minikube.k8s.io/updated_at":{},"f:minikube.k8s.io/version":{}}}}} {kubelet Update v1 2020-11-09 21:47:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTran
sitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus
{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{528310767616 0} {<nil>} 515928484Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{31628288000 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{528310767616 0} {<nil>} 515928484Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{31628288000 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,
Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:22 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.82.16,},NodeAddress{Type:Hostname,Address:embed-certs-20201109134632-342799,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d4c0807a7121464481bb577d74022f0b,SystemUUID:23a0b8f9-e92b-4fd4-ae6c-7935ad2706cf,BootID:9ad1ab50-5be9-48e2-8ae1-dc31113bc120,KernelVersion:4.9.0-14-amd64,OSImage:Ubuntu 20.04.1 LTS,ContainerRuntimeVersion:docker://19.3.13,KubeletVersion:v1.19.2,KubeProxyVersion:v1.19.2,Oper
atingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard@sha256:45ef224759bc50c84445f233fffae4aa3bdaec705cb5ee4bfe36d183b270b45d kubernetesui/dashboard:v2.0.3],SizeBytes:224634157,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:fc905eab708c6abbdf0ef0d47667592b948fea3adf31d71b19b5205340d00011 k8s.gcr.io/kube-apiserver:v1.19.2],SizeBytes:118778218,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:fa7c9d19680704e246873eb600c02fa95167d5c58e56d56ba9ed30b7c4150ac1 k8s.gcr.io/kube-proxy:v1.19.2],SizeBytes:117686573,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:c94b98d9f79bdfe33010c313891d99ed50858d6f04ceef865e7904c338dad913 k8s.gcr.io/kube-controller-manager:v1.19.2],SizeBytes:110778730,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb058c7394fad4d968d366b8b372698a1144a
1c3c6de52cdf46ff050ccfd31ff k8s.gcr.io/kube-scheduler:v1.19.2],SizeBytes:45656426,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf kubernetesui/metrics-scraper:v1.0.4],SizeBytes:36937728,},ContainerImage{Names:[gcr.io/k8s-minikube/storage-provisioner@sha256:5d8c9e69200846ff740bca872d681d2a736014386e4006fd26c4bf24ef7813ec gcr.io/k8s-minikube/storage-provisioner:v3],SizeBytes:29667328,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[minikube-local-cache-test:functional-20201109132758-342799],SizeBytes:30,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
	* E1109 21:47:35.683303       1 ttl_controller.go:220] etcdserver: request timed out
	* W1109 21:47:35.694645       1 endpointslice_controller.go:284] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: failed to create EndpointSlice for Service kube-system/kube-dns: etcdserver: request timed out
	* I1109 21:47:35.694823       1 event.go:291] "Event occurred" object="kube-system/kube-dns" kind="Service" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpointSlices" message="Error updating Endpoint Slices for Service kube-system/kube-dns: failed to create EndpointSlice for Service kube-system/kube-dns: etcdserver: request timed out"
	* E1109 21:47:36.381120       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	* I1109 21:47:36.386534       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Warning" reason="ReplicaSetCreateError" message="Failed to create new replica set \"coredns-f9fd979d6\": etcdserver: request timed out"
	* I1109 21:47:39.421022       1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-7gxsc"
	* I1109 21:47:39.466610       1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-hz9zv"
	* E1109 21:47:39.467168       1 tokens_controller.go:261] error synchronizing serviceaccount kube-node-lease/default: etcdserver: request timed out
	* I1109 21:47:39.498774       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8j529"
	* I1109 21:47:39.884325       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-f9fd979d6 to 1"
	* I1109 21:47:40.079782       1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-f9fd979d6-hz9zv"
	* I1109 21:47:43.659662       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	* 
	* ==> kube-proxy [0497c03cee18] <==
	* I1109 21:47:41.260238       1 node.go:136] Successfully retrieved node IP: 192.168.82.16
	* I1109 21:47:41.260371       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.82.16), assume IPv4 operation
	* W1109 21:47:41.324964       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
	* I1109 21:47:41.325105       1 server_others.go:186] Using iptables Proxier.
	* W1109 21:47:41.325167       1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
	* I1109 21:47:41.325208       1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
	* I1109 21:47:41.325650       1 server.go:650] Version: v1.19.2
	* I1109 21:47:41.326341       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:47:41.326579       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1109 21:47:41.326675       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1109 21:47:41.327004       1 config.go:315] Starting service config controller
	* I1109 21:47:41.327032       1 config.go:224] Starting endpoint slice config controller
	* I1109 21:47:41.327075       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I1109 21:47:41.327067       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I1109 21:47:41.427318       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* I1109 21:47:41.427320       1 shared_informer.go:247] Caches are synced for service config 
	* 
	* ==> kube-proxy [e73b51e98fd2] <==
	* I1109 21:48:39.059346       1 node.go:136] Successfully retrieved node IP: 192.168.82.16
	* I1109 21:48:39.059834       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.82.16), assume IPv4 operation
	* W1109 21:48:39.182518       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
	* I1109 21:48:39.182640       1 server_others.go:186] Using iptables Proxier.
	* W1109 21:48:39.182655       1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
	* I1109 21:48:39.182661       1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
	* I1109 21:48:39.182975       1 server.go:650] Version: v1.19.2
	* I1109 21:48:39.183647       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:48:39.183813       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1109 21:48:39.183944       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1109 21:48:39.184145       1 config.go:315] Starting service config controller
	* I1109 21:48:39.184168       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I1109 21:48:39.184217       1 config.go:224] Starting endpoint slice config controller
	* I1109 21:48:39.184230       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I1109 21:48:39.284489       1 shared_informer.go:247] Caches are synced for service config 
	* I1109 21:48:39.284556       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* 
	* ==> kube-scheduler [1e430985e895] <==
	* I1109 21:47:18.371790       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:47:18.371812       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:47:18.371898       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* E1109 21:47:18.380092       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1109 21:47:18.380505       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:47:18.380960       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1109 21:47:18.381517       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:47:18.382034       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:47:18.382265       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:47:18.382384       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:47:18.382461       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:47:18.382540       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:47:18.382708       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:47:18.382796       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1109 21:47:18.382937       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:47:18.382967       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:47:19.265518       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:47:19.363478       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1109 21:47:19.377101       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:47:19.377447       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:47:19.424091       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:47:19.507148       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:47:19.666023       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:47:19.683829       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* I1109 21:47:22.372226       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kube-scheduler [c7a8df7cab25] <==
	* I1109 21:48:28.407698       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:48:28.407782       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:48:30.093572       1 serving.go:331] Generated self-signed cert in-memory
	* W1109 21:48:35.363283       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	* W1109 21:48:35.363339       1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	* W1109 21:48:35.363352       1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
	* W1109 21:48:35.363360       1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	* I1109 21:48:35.475503       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:48:35.475538       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:48:35.479872       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:48:35.479933       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:48:35.480498       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	* I1109 21:48:35.481568       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* I1109 21:48:35.581291       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2020-11-09 21:48:11 UTC, end at Mon 2020-11-09 21:49:22 UTC. --
	* Nov 09 21:48:52 embed-certs-20201109134632-342799 kubelet[1190]: E1109 21:48:52.489378    1190 controller.go:178] failed to update node lease, error: etcdserver: request timed out
	* Nov 09 21:48:53 embed-certs-20201109134632-342799 kubelet[1190]: E1109 21:48:53.031285    1190 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-embed-certs-20201109134632-342799.1645f54240183408", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-embed-certs-20201109134632-342799", UID:"9447b7084db297c9714ad3f013f933da", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"SandboxChanged", Message:"Pod sandbox cha
nged, it will be killed and re-created.", Source:v1.EventSource{Component:"kubelet", Host:"embed-certs-20201109134632-342799"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfe28c2ac71e4608, ext:1509486595, loc:(*time.Location)(0x6cf5c60)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfe28c2ac71e4608, ext:1509486595, loc:(*time.Location)(0x6cf5c60)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: request timed out' (will not retry!)
	* Nov 09 21:48:56 embed-certs-20201109134632-342799 kubelet[1190]: E1109 21:48:56.699075    1190 controller.go:178] failed to update node lease, error: Operation cannot be fulfilled on leases.coordination.k8s.io "embed-certs-20201109134632-342799": the object has been modified; please apply your changes to the latest version and try again
	* Nov 09 21:48:58 embed-certs-20201109134632-342799 kubelet[1190]: E1109 21:48:58.097199    1190 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Nov 09 21:48:58 embed-certs-20201109134632-342799 kubelet[1190]: E1109 21:48:58.097246    1190 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Nov 09 21:49:00 embed-certs-20201109134632-342799 kubelet[1190]: I1109 21:49:00.480143    1190 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 09 21:49:00 embed-certs-20201109134632-342799 kubelet[1190]: I1109 21:49:00.520244    1190 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 09 21:49:00 embed-certs-20201109134632-342799 kubelet[1190]: I1109 21:49:00.628898    1190 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/10004477-735a-4d58-95eb-eb8d770a6035-tmp-volume") pod "dashboard-metrics-scraper-c95fcf479-zw8m7" (UID: "10004477-735a-4d58-95eb-eb8d770a6035")
	* Nov 09 21:49:00 embed-certs-20201109134632-342799 kubelet[1190]: I1109 21:49:00.628952    1190 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/1f496dcb-22d1-4c9f-90bb-2b9130632b61-tmp-volume") pod "kubernetes-dashboard-584f46694c-nqgzr" (UID: "1f496dcb-22d1-4c9f-90bb-2b9130632b61")
	* Nov 09 21:49:00 embed-certs-20201109134632-342799 kubelet[1190]: I1109 21:49:00.628993    1190 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-5mfvs" (UniqueName: "kubernetes.io/secret/1f496dcb-22d1-4c9f-90bb-2b9130632b61-kubernetes-dashboard-token-5mfvs") pod "kubernetes-dashboard-584f46694c-nqgzr" (UID: "1f496dcb-22d1-4c9f-90bb-2b9130632b61")
	* Nov 09 21:49:00 embed-certs-20201109134632-342799 kubelet[1190]: I1109 21:49:00.629067    1190 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-5mfvs" (UniqueName: "kubernetes.io/secret/10004477-735a-4d58-95eb-eb8d770a6035-kubernetes-dashboard-token-5mfvs") pod "dashboard-metrics-scraper-c95fcf479-zw8m7" (UID: "10004477-735a-4d58-95eb-eb8d770a6035")
	* Nov 09 21:49:01 embed-certs-20201109134632-342799 kubelet[1190]: W1109 21:49:01.519083    1190 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479-zw8m7 through plugin: invalid network status for
	* Nov 09 21:49:01 embed-certs-20201109134632-342799 kubelet[1190]: W1109 21:49:01.597676    1190 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-584f46694c-nqgzr through plugin: invalid network status for
	* Nov 09 21:49:01 embed-certs-20201109134632-342799 kubelet[1190]: W1109 21:49:01.794233    1190 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479-zw8m7 through plugin: invalid network status for
	* Nov 09 21:49:01 embed-certs-20201109134632-342799 kubelet[1190]: W1109 21:49:01.844561    1190 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-584f46694c-nqgzr through plugin: invalid network status for
	* Nov 09 21:49:02 embed-certs-20201109134632-342799 kubelet[1190]: W1109 21:49:02.909450    1190 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479-zw8m7 through plugin: invalid network status for
	* Nov 09 21:49:02 embed-certs-20201109134632-342799 kubelet[1190]: W1109 21:49:02.919615    1190 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-584f46694c-nqgzr through plugin: invalid network status for
	* Nov 09 21:49:08 embed-certs-20201109134632-342799 kubelet[1190]: E1109 21:49:08.112573    1190 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Nov 09 21:49:08 embed-certs-20201109134632-342799 kubelet[1190]: E1109 21:49:08.112643    1190 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Nov 09 21:49:10 embed-certs-20201109134632-342799 kubelet[1190]: I1109 21:49:10.013114    1190 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 572d2f66874bf142a9732605494fd31cdca81bff7e3821991815dbb5643a24bb
	* Nov 09 21:49:10 embed-certs-20201109134632-342799 kubelet[1190]: I1109 21:49:10.013600    1190 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 824d17be8ab7f33eba6f1605b8a0fc8992d3f3f4ea59f7ac4283d68ea500e798
	* Nov 09 21:49:10 embed-certs-20201109134632-342799 kubelet[1190]: E1109 21:49:10.013964    1190 pod_workers.go:191] Error syncing pod f8f2dc83-2275-4184-b931-8767527fb343 ("storage-provisioner_kube-system(f8f2dc83-2275-4184-b931-8767527fb343)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f8f2dc83-2275-4184-b931-8767527fb343)"
	* Nov 09 21:49:18 embed-certs-20201109134632-342799 kubelet[1190]: E1109 21:49:18.126054    1190 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Nov 09 21:49:18 embed-certs-20201109134632-342799 kubelet[1190]: E1109 21:49:18.126092    1190 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Nov 09 21:49:21 embed-certs-20201109134632-342799 kubelet[1190]: I1109 21:49:21.375445    1190 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 824d17be8ab7f33eba6f1605b8a0fc8992d3f3f4ea59f7ac4283d68ea500e798
	* 
	* ==> kubernetes-dashboard [e8b8095c5dce] <==
	* 2020/11/09 21:49:01 Starting overwatch
	* 2020/11/09 21:49:01 Using namespace: kubernetes-dashboard
	* 2020/11/09 21:49:01 Using in-cluster config to connect to apiserver
	* 2020/11/09 21:49:01 Using secret token for csrf signing
	* 2020/11/09 21:49:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	* 2020/11/09 21:49:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	* 2020/11/09 21:49:01 Successful initial request to the apiserver, version: v1.19.2
	* 2020/11/09 21:49:01 Generating JWE encryption key
	* 2020/11/09 21:49:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	* 2020/11/09 21:49:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	* 2020/11/09 21:49:02 Initializing JWE encryption key from synchronized object
	* 2020/11/09 21:49:02 Creating in-cluster Sidecar client
	* 2020/11/09 21:49:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	* 2020/11/09 21:49:02 Serving insecurely on HTTP port: 9090
	* 
	* ==> storage-provisioner [824d17be8ab7] <==
	* F1109 21:49:09.164907       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 13:49:21.381017  548790 out.go:286] unable to execute * 2020-11-09 21:48:46.022072 W | etcdserver: request "header:<ID:17017825011350935536 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/specs/kubernetes-dashboard/dashboard-metrics-scraper\" mod_revision:0 > success:<request_put:<key:\"/registry/services/specs/kubernetes-dashboard/dashboard-metrics-scraper\" value_size:1282 >> failure:<>>" with result "size:16" took too long (2.335655319s) to execute
	: html/template:* 2020-11-09 21:48:46.022072 W | etcdserver: request "header:<ID:17017825011350935536 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/specs/kubernetes-dashboard/dashboard-metrics-scraper\" mod_revision:0 > success:<request_put:<key:\"/registry/services/specs/kubernetes-dashboard/dashboard-metrics-scraper\" value_size:1282 >> failure:<>>" with result "size:16" took too long (2.335655319s) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E1109 13:49:21.415479  548790 out.go:286] unable to execute * 2020-11-09 21:48:56.692649 W | etcdserver: request "header:<ID:17017825011350935540 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-20201109134632-342799\" mod_revision:460 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-20201109134632-342799\" value_size:611 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-20201109134632-342799\" > >>" with result "size:16" took too long (7.0424289s) to execute
	: html/template:* 2020-11-09 21:48:56.692649 W | etcdserver: request "header:<ID:17017825011350935540 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-20201109134632-342799\" mod_revision:460 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-20201109134632-342799\" value_size:611 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-20201109134632-342799\" > >>" with result "size:16" took too long (7.0424289s) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E1109 13:49:21.567420  548790 out.go:286] unable to execute * 2020-11-09 21:47:36.376245 W | etcdserver: request "header:<ID:17017825011331503654 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-node-lease/default\" mod_revision:324 > success:<request_put:<key:\"/registry/serviceaccounts/kube-node-lease/default\" value_size:153 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-node-lease/default\" > >>" with result "size:16" took too long (3.596533742s) to execute
	: html/template:* 2020-11-09 21:47:36.376245 W | etcdserver: request "header:<ID:17017825011331503654 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-node-lease/default\" mod_revision:324 > success:<request_put:<key:\"/registry/serviceaccounts/kube-node-lease/default\" value_size:153 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-node-lease/default\" > >>" with result "size:16" took too long (3.596533742s) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E1109 13:49:22.163571  548790 out.go:281] unable to parse "* E1109 21:47:35.670869       1 node_lifecycle_controller.go:607] Failed to reconcile labels for node <embed-certs-20201109134632-342799>, requeue it: failed update labels for node &Node{ObjectMeta:{embed-certs-20201109134632-342799   /api/v1/nodes/embed-certs-20201109134632-342799 7d77e4ed-7aec-495f-a84f-f2914a791bd5 249 0 2020-11-09 21:47:18 +0000 UTC <nil> <nil> map[kubernetes.io/arch:amd64 kubernetes.io/hostname:embed-certs-20201109134632-342799 kubernetes.io/os:linux minikube.k8s.io/commit:21ac2a6a37964be4739a8be2fb5a50a8d224597d minikube.k8s.io/name:embed-certs-20201109134632-342799 minikube.k8s.io/updated_at:2020_11_09T13_47_21_0700 minikube.k8s.io/version:v1.14.2 node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2020-11-09 21:47:21 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\"f:kubeadm.
alpha.kubernetes.io/cri-socket\":{}},\"f:labels\":{\"f:node-role.kubernetes.io/master\":{}}}}} {kubectl-label Update v1 2020-11-09 21:47:22 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:labels\":{\"f:minikube.k8s.io/commit\":{},\"f:minikube.k8s.io/name\":{},\"f:minikube.k8s.io/updated_at\":{},\"f:minikube.k8s.io/version\":{}}}}} {kubelet Update v1 2020-11-09 21:47:22 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:volumes.kubernetes.io/controller-managed-attach-detach\":{}},\"f:labels\":{\".\":{},\"f:kubernetes.io/arch\":{},\"f:kubernetes.io/hostname\":{},\"f:kubernetes.io/os\":{}}},\"f:status\":{\"f:addresses\":{\".\":{},\"k:{\\\"type\\\":\\\"Hostname\\\"}\":{\".\":{},\"f:address\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"InternalIP\\\"}\":{\".\":{},\"f:address\":{},\"f:type\":{}}},\"f:allocatable\":{\".\":{},\"f:cpu\":{},\"f:ephemeral-storage\":{},\"f:hugepages-1Gi\":{},\"f:hugepages-2Mi\":{},\"f:memory\":{},\"f:pods\":{}},\"f:capacity\":{\".\":{},\"f:cpu\":{},\"f:ephemeral-storage\":{},\"f:hug
epages-1Gi\":{},\"f:hugepages-2Mi\":{},\"f:memory\":{},\"f:pods\":{}},\"f:conditions\":{\".\":{},\"k:{\\\"type\\\":\\\"DiskPressure\\\"}\":{\".\":{},\"f:lastHeartbeatTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"MemoryPressure\\\"}\":{\".\":{},\"f:lastHeartbeatTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"PIDPressure\\\"}\":{\".\":{},\"f:lastHeartbeatTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"Ready\\\"}\":{\".\":{},\"f:lastHeartbeatTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}}},\"f:daemonEndpoints\":{\"f:kubeletEndpoint\":{\"f:Port\":{}}},\"f:images\":{},\"f:nodeInfo\":{\"f:architecture\":{},\"f:bootID\":{},\"f:containerRuntimeVersion\":{},\"f:kernelVersion\":{},\"f:kubeProxyVersion\":{},\"f:kubeletVersion\":{},\"f:machin
eID\":{},\"f:operatingSystem\":{},\"f:osImage\":{},\"f:systemUUID\":{}}}}}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{528310767616 0} {<nil>} 515928484Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{31628288000 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{528310767616 0} {<nil>} 515928484Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{31628288000 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTran
sitionTime:2020-11-09 21:47:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:22 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.82.16,},NodeAddress{Type:Hostname,Address:embed-certs-20201109134632-342799,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:
NodeSystemInfo{MachineID:d4c0807a7121464481bb577d74022f0b,SystemUUID:23a0b8f9-e92b-4fd4-ae6c-7935ad2706cf,BootID:9ad1ab50-5be9-48e2-8ae1-dc31113bc120,KernelVersion:4.9.0-14-amd64,OSImage:Ubuntu 20.04.1 LTS,ContainerRuntimeVersion:docker://19.3.13,KubeletVersion:v1.19.2,KubeProxyVersion:v1.19.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard@sha256:45ef224759bc50c84445f233fffae4aa3bdaec705cb5ee4bfe36d183b270b45d kubernetesui/dashboard:v2.0.3],SizeBytes:224634157,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:fc905eab708c6abbdf0ef0d47667592b948fea3adf31d71b19b5205340d00011 k8s.gcr.io/kube-apiserver:v1.19.2],SizeBytes:118778218,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:fa7c9d19680704e246873eb600c02fa95167d5c58e56d56ba9ed30b7c4150ac1 k8s.gcr.io/kube-proxy:v1.19.2],SizeBytes:
117686573,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:c94b98d9f79bdfe33010c313891d99ed50858d6f04ceef865e7904c338dad913 k8s.gcr.io/kube-controller-manager:v1.19.2],SizeBytes:110778730,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb058c7394fad4d968d366b8b372698a1144a1c3c6de52cdf46ff050ccfd31ff k8s.gcr.io/kube-scheduler:v1.19.2],SizeBytes:45656426,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf kubernetesui/metrics-scraper:v1.0.4],SizeBytes:36937728,},ContainerImage{Names:[gcr.io/k8s-minikube/storage-provisioner@sha256:5d8c9e69200846ff740bca872d681d2a736014386e4006fd26c4bf24ef7813ec gcr.io/k8s-minikube/storage-provisioner:v3],SizeBytes:29667328,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f5
48c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[minikube-local-cache-test:functional-20201109132758-342799],SizeBytes:30,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}\n": template: * E1109 21:47:35.670869       1 node_lifecycle_controller.go:607] Failed to reconcile labels for node <embed-certs-20201109134632-342799>, requeue it: failed update labels for node &Node{ObjectMeta:{embed-certs-20201109134632-342799   /api/v1/nodes/embed-certs-20201109134632-342799 7d77e4ed-7aec-495f-a84f-f2914a791bd5 249 0 2020-11-09 21:47:18 +0000 UTC <nil> <nil> map[kubernetes.io/arch:amd64 kubernetes.io/hostname:embed-certs-20201109134632-342799 kubernetes.io/os:linux minikube.k8s.io/commit:21ac2a6a37964be4739a8be2fb5a50a8d224597d minikube.k8s.io/name:embed-certs-20201109134632-342799 minikube.k8s.io/updated_at:2020_11_09T13_47_21_0700 minikube.k8s.io/version:v1.14.2 node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock volumes.kuber
netes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2020-11-09 21:47:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {kubectl-label Update v1 2020-11-09 21:47:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:minikube.k8s.io/commit":{},"f:minikube.k8s.io/name":{},"f:minikube.k8s.io/updated_at":{},"f:minikube.k8s.io/version":{}}}}} {kubelet Update v1 2020-11-09 21:47:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:c
apacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},S
pec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{528310767616 0} {<nil>} 515928484Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{31628288000 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{528310767616 0} {<nil>} 515928484Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{31628288000 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Mes
sage:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2020-11-09 21:47:22 +0000 UTC,LastTransitionTime:2020-11-09 21:47:22 +0000 UTC,Reason:KubeletNotReady,Message:container runtime status check may not have completed yet,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.82.16,},NodeAddress{Type:Hostname,Address:embed-certs-20201109134632-342799,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d4c0807a7121464481bb577d74022f0b,SystemUUID:23a0b8f9-
e92b-4fd4-ae6c-7935ad2706cf,BootID:9ad1ab50-5be9-48e2-8ae1-dc31113bc120,KernelVersion:4.9.0-14-amd64,OSImage:Ubuntu 20.04.1 LTS,ContainerRuntimeVersion:docker://19.3.13,KubeletVersion:v1.19.2,KubeProxyVersion:v1.19.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard@sha256:45ef224759bc50c84445f233fffae4aa3bdaec705cb5ee4bfe36d183b270b45d kubernetesui/dashboard:v2.0.3],SizeBytes:224634157,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:fc905eab708c6abbdf0ef0d47667592b948fea3adf31d71b19b5205340d00011 k8s.gcr.io/kube-apiserver:v1.19.2],SizeBytes:118778218,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:fa7c9d19680704e246873eb600c02fa95167d5c58e56d56ba9ed30b7c4150ac1 k8s.gcr.io/kube-proxy:v1.19.2],SizeBytes:117686573,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:c9
4b98d9f79bdfe33010c313891d99ed50858d6f04ceef865e7904c338dad913 k8s.gcr.io/kube-controller-manager:v1.19.2],SizeBytes:110778730,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb058c7394fad4d968d366b8b372698a1144a1c3c6de52cdf46ff050ccfd31ff k8s.gcr.io/kube-scheduler:v1.19.2],SizeBytes:45656426,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf kubernetesui/metrics-scraper:v1.0.4],SizeBytes:36937728,},ContainerImage{Names:[gcr.io/k8s-minikube/storage-provisioner@sha256:5d8c9e69200846ff740bca872d681d2a736014386e4006fd26c4bf24ef7813ec gcr.io/k8s-minikube/storage-provisioner:v3],SizeBytes:29667328,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[minik
ube-local-cache-test:functional-20201109132758-342799],SizeBytes:30,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
	:1: unexpected "}" in operand - returning raw string.

                                                
                                                
** /stderr **
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20201109134632-342799 -n embed-certs-20201109134632-342799
helpers_test.go:255: (dbg) Run:  kubectl --context embed-certs-20201109134632-342799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: non-running pods: 
helpers_test.go:263: ======> post-mortem[TestStartStop/group/embed-certs/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:266: (dbg) Run:  kubectl --context embed-certs-20201109134632-342799 describe pod 
helpers_test.go:266: (dbg) Non-zero exit: kubectl --context embed-certs-20201109134632-342799 describe pod : exit status 1 (76.236609ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:268: kubectl --context embed-certs-20201109134632-342799 describe pod : exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (9.71s)
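The exit status 1 from "kubectl describe pod" above is expected post-mortem noise rather than part of the failure: the preceding jsonpath query found no non-running pods, so describe was invoked with an empty resource name. A minimal sketch of how such a call can be guarded, assuming a hypothetical describeNonRunningPods helper rather than the actual helpers_test.go code:

// Hypothetical sketch, not the helpers_test.go implementation.
package main

import (
	"fmt"
	"os/exec"
)

// describeNonRunningPods skips "kubectl describe pod" entirely when the
// earlier field-selector query returned no pod names, avoiding the
// "error: resource name may not be empty" exit status 1 seen above.
func describeNonRunningPods(kubectlContext string, podNames []string) error {
	if len(podNames) == 0 {
		return nil // nothing to describe
	}
	args := append([]string{"--context", kubectlContext, "describe", "pod"}, podNames...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl describe pod: %v\n%s", err, out)
	}
	fmt.Printf("%s\n", out)
	return nil
}

func main() {
	// With an empty pod list the call is a no-op instead of an error.
	_ = describeNonRunningPods("embed-certs-20201109134632-342799", nil)
}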

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (9.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:232: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20201109134552-342799 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:232: Found non-minikube image: busybox:1.28.4-glibc
start_stop_delete_test.go:232: Found non-minikube image: minikube-local-cache-test:functional-20201109132758-342799
start_stop_delete_test.go:232: v1.13.0 images mismatch (-want +got):
[]string{
- 	"docker.io/kubernetesui/dashboard:v2.0.3",
- 	"docker.io/kubernetesui/metrics-scraper:v1.0.4",
	"gcr.io/k8s-minikube/storage-provisioner:v3",
	"k8s.gcr.io/coredns:1.2.6",
	... // 4 identical elements
	"k8s.gcr.io/kube-scheduler:v1.13.0",
	"k8s.gcr.io/pause:3.1",
+ 	"kubernetesui/dashboard:v2.0.3",
+ 	"kubernetesui/metrics-scraper:v1.0.4",
}
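The only differences in the diff above are registry prefixes: the expected list spells out docker.io/kubernetesui/... while crictl reported the bare Docker Hub form. A minimal sketch, not the minikube test code, of normalizing both sides to a canonical docker.io/ prefix before diffing, using a hypothetical addDockerIOPrefix helper and the go-cmp library:

// Hypothetical sketch, not the minikube VerifyKubernetesImages test code.
package main

import (
	"fmt"
	"sort"
	"strings"

	"github.com/google/go-cmp/cmp"
)

// addDockerIOPrefix canonicalizes image references so that bare Docker Hub
// names ("kubernetesui/dashboard:v2.0.3") compare equal to their fully
// qualified form ("docker.io/kubernetesui/dashboard:v2.0.3").
func addDockerIOPrefix(images []string) []string {
	out := make([]string, 0, len(images))
	for _, img := range images {
		slash := strings.Index(img, "/")
		// Treat the part before the first "/" as a registry host only if it
		// looks like one (contains "." or ":" or is "localhost"); otherwise it
		// is a bare Docker Hub reference and gets the docker.io/ prefix.
		if slash < 0 {
			img = "docker.io/" + img
		} else if host := img[:slash]; !strings.ContainsAny(host, ".:") && host != "localhost" {
			img = "docker.io/" + img
		}
		out = append(out, img)
	}
	sort.Strings(out)
	return out
}

func main() {
	want := []string{"docker.io/kubernetesui/dashboard:v2.0.3", "k8s.gcr.io/pause:3.1"}
	got := []string{"kubernetesui/dashboard:v2.0.3", "k8s.gcr.io/pause:3.1"}
	// With both sides normalized, a prefix-only mismatch like the one above
	// produces an empty diff.
	fmt.Println(cmp.Diff(addDockerIOPrefix(want), addDockerIOPrefix(got)))
}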
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:225: (dbg) Run:  docker inspect old-k8s-version-20201109134552-342799
helpers_test.go:229: (dbg) docker inspect old-k8s-version-20201109134552-342799:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cfd70b73dfb6db0516841fa9a91461380a57fa796842d257bef788283c2e545d",
	        "Created": "2020-11-09T21:45:55.430862775Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 540371,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-11-09T21:48:24.860311086Z",
	            "FinishedAt": "2020-11-09T21:48:22.797870491Z"
	        },
	        "Image": "sha256:e0876e0a2db41a04c8143cbd27a7a5f9e10b610cb093093def031c59e5b44b0c",
	        "ResolvConfPath": "/var/lib/docker/containers/cfd70b73dfb6db0516841fa9a91461380a57fa796842d257bef788283c2e545d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cfd70b73dfb6db0516841fa9a91461380a57fa796842d257bef788283c2e545d/hostname",
	        "HostsPath": "/var/lib/docker/containers/cfd70b73dfb6db0516841fa9a91461380a57fa796842d257bef788283c2e545d/hosts",
	        "LogPath": "/var/lib/docker/containers/cfd70b73dfb6db0516841fa9a91461380a57fa796842d257bef788283c2e545d/cfd70b73dfb6db0516841fa9a91461380a57fa796842d257bef788283c2e545d-json.log",
	        "Name": "/old-k8s-version-20201109134552-342799",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20201109134552-342799:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20201109134552-342799",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bc1e54b698a8405d0672b9789553d3b556ee40f2e8de5116a22175e5af119a97-init/diff:/var/lib/docker/overlay2/95e86d765bff52afd170fbce58d14aba80acb592622f18ddf93f06d4a9f68b7b/diff:/var/lib/docker/overlay2/f3b9d7ca87d0bc2c33a2b73d4acfe36b9573ed84758410f9fb2646a520f241fe/diff:/var/lib/docker/overlay2/207fc0f6ee67dd96ccb25c63e93e7189f5f397b509f8c71ead5583a80c5f94b5/diff:/var/lib/docker/overlay2/8caa5ec9f05b14f99c7708700c1734fe9d519d629a5228c0c3cb1231f1cef09e/diff:/var/lib/docker/overlay2/f2f5d9470c647e56b32cee1438db6b6249291738854b6e7a0a4a405034649aa6/diff:/var/lib/docker/overlay2/99b3672a581ed70eb5d25da5aed44455272a7adbd48148c91dc38abfbaa6e93d/diff:/var/lib/docker/overlay2/4bc3f7ac122327f368cdfb9bf82d578fce25cf7f63160aa08d869610da8889f4/diff:/var/lib/docker/overlay2/0f945a2ada7520d0f95289ddf08a8e7aedfd5dbfad8087529175cd81d61b52bc/diff:/var/lib/docker/overlay2/a14122d44a5bdb05416a0e7fbe1684738e86454c2eeb7671ecc856e6fdaa0273/diff:/var/lib/docker/overlay2/e669c0
1451df27b1aeed4835d4d9d40fae282947f2b260d9bc1d86355f464b80/diff:/var/lib/docker/overlay2/f0227d1f2e3671850f58a9e89d347d8c2a26c40d79b078f02122819452c582d8/diff:/var/lib/docker/overlay2/35b3431b00b5c7bd66b1d24487a8a45645708ce7a87848986068c0df1515b812/diff:/var/lib/docker/overlay2/c1af3457c560ed19e519e64ff919ac7035d8d9801f4f44adf7b160339827ebd0/diff:/var/lib/docker/overlay2/3861f37a52f49a93d483d69cc77384a5792b260325af8224c6225039f7480889/diff:/var/lib/docker/overlay2/05239e7556fc89f82e761ff1ddd2d1ed0ee1b426a039ad3c9a3ba0519d737f51/diff:/var/lib/docker/overlay2/cb1f04e7554841e6f2102080f7854a3c7e464e45648c1419ef2f8b4d8a03f0bb/diff:/var/lib/docker/overlay2/80908997b1122a686bee670e8dd93659dece6b51614d8576f02c11b96178534d/diff:/var/lib/docker/overlay2/6c4d0be909f5f0a95818a6009b44c355b57e39b0709de1cce2ba45597354d590/diff:/var/lib/docker/overlay2/3a855d5f2b162c2d938a5745ba8131c9a2b244967b4a03f6cddd177236f3a934/diff:/var/lib/docker/overlay2/0ab0a2a3e614830a41a389aa6ff55503f9c10b1227acfb927550f9726eb9a605/diff:/var/lib/d
ocker/overlay2/3331dc047d4c1c4d3b6bf48aba65be1ad34cff6d25329ef56e91ad8e4ee65fde/diff:/var/lib/docker/overlay2/d286c2ac5bb00869e09924bb7fc7389073b7267af1cc4c3ae6a339b29e4c0cc1/diff:/var/lib/docker/overlay2/d0a518fbd2bc93756d11040cfe4d7757f96e813dd9ddf918a54a82f7e61cb791/diff:/var/lib/docker/overlay2/2d63359a4d0da5faa54d017f487391caaee972fe4fb98db4b82e7aee2d9b01ee/diff:/var/lib/docker/overlay2/40ca142bba947004e2782bce25bd04810589fffbf8086ba91e3a074517cfa13c/diff:/var/lib/docker/overlay2/5b738c1270b81825ef44e9bd466dd8533bc4e3e3796355d7dc5dc104fb3a18c8/diff:/var/lib/docker/overlay2/c57d8a6f6b648bd2cf9d7140762b8590e1a71b9040c8a0f2b8ab31f65011bcd2/diff:/var/lib/docker/overlay2/37f83aba64f92a6cf695468f5b0a1016634c95e09933dcecec297cca22b9038b/diff:/var/lib/docker/overlay2/396d3843ed4e04893bd4bf961eab2522ce90962e626e057f5be38cbe0ed65d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bc1e54b698a8405d0672b9789553d3b556ee40f2e8de5116a22175e5af119a97/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bc1e54b698a8405d0672b9789553d3b556ee40f2e8de5116a22175e5af119a97/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bc1e54b698a8405d0672b9789553d3b556ee40f2e8de5116a22175e5af119a97/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20201109134552-342799",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20201109134552-342799/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20201109134552-342799",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20201109134552-342799",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20201109134552-342799",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "235d84e1fae33e1157c1cda4e2694b9c734de79c2775f5a18735ccf6c987e4c3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/235d84e1fae3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20201109134552-342799": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.59.16"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "cfd70b73dfb6"
	                    ],
	                    "NetworkID": "6a798602d87272d4795c7b386775550e088a3d5682b56de75a4efc87906b3f55",
	                    "EndpointID": "c078af9e3359035362d7ce4a7ac9e2f7db595874c1d5525280ef131798ff18d7",
	                    "Gateway": "192.168.59.1",
	                    "IPAddress": "192.168.59.16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3b:10",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20201109134552-342799 -n old-k8s-version-20201109134552-342799
helpers_test.go:238: <<< TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20201109134552-342799 logs -n 25
helpers_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20201109134552-342799 logs -n 25: (3.509691579s)
helpers_test.go:246: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Mon 2020-11-09 21:48:25 UTC, end at Mon 2020-11-09 21:49:34 UTC. --
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 systemd[1]: docker.service: Succeeded.
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 systemd[1]: Stopped Docker Application Container Engine.
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 systemd[1]: Starting Docker Application Container Engine...
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:36.803323979Z" level=info msg="Starting up"
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:36.806203925Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:36.806242325Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:36.806274753Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:36.806288685Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:36.808144170Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:36.808181062Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:36.808297650Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:36.808326459Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:36.842328484Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Nov 09 21:48:37 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:37.851319750Z" level=warning msg="Your kernel does not support swap memory limit"
	* Nov 09 21:48:37 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:37.851524034Z" level=warning msg="Your kernel does not support cgroup rt period"
	* Nov 09 21:48:37 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:37.851545075Z" level=warning msg="Your kernel does not support cgroup rt runtime"
	* Nov 09 21:48:37 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:37.851887630Z" level=info msg="Loading containers: start."
	* Nov 09 21:48:38 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:38.065436280Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Nov 09 21:48:38 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:38.137374335Z" level=info msg="Loading containers: done."
	* Nov 09 21:48:38 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:38.166576619Z" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 version=19.03.13
	* Nov 09 21:48:38 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:38.166695679Z" level=info msg="Daemon has completed initialization"
	* Nov 09 21:48:38 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:38.189655428Z" level=info msg="API listen on /var/run/docker.sock"
	* Nov 09 21:48:38 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:38.189792220Z" level=info msg="API listen on [::]:2376"
	* Nov 09 21:48:38 old-k8s-version-20201109134552-342799 systemd[1]: Started Docker Application Container Engine.
	* Nov 09 21:49:12 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:49:12.480441165Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                             CREATED              STATE               NAME                        ATTEMPT             POD ID
	* c38f07639a34a       503bc4b7440b9                                                                     14 seconds ago       Running             kubernetes-dashboard        0                   c1446a446bcab
	* 6b8f59db65677       86262685d9abb                                                                     14 seconds ago       Running             dashboard-metrics-scraper   0                   386cd3ccf8164
	* 3c102ac79db19       f59dcacceff45                                                                     23 seconds ago       Running             coredns                     1                   c505f78f24db9
	* 91e52c326bc0a       56cc512116c8f                                                                     23 seconds ago       Running             busybox                     1                   2ed15a87982ca
	* 6271ad6709bba       bad58561c4be7                                                                     24 seconds ago       Running             storage-provisioner         2                   6d49b57691d98
	* 2b37250db7949       8fa56d18961fa                                                                     24 seconds ago       Running             kube-proxy                  1                   8e5c3490bba84
	* 4c6596774f916       3cab8e1b9802c                                                                     35 seconds ago       Running             etcd                        1                   7e610759eb0bf
	* ea9697518d254       f1ff9b7e3d6e9                                                                     35 seconds ago       Running             kube-apiserver              1                   76445a009ce90
	* c3f4383e623fe       d82530ead066d                                                                     35 seconds ago       Running             kube-controller-manager     2                   3130f7c56ccb8
	* edd9fb6c5659e       9508b7d8008de                                                                     35 seconds ago       Running             kube-scheduler              1                   5c12e73e3f60a
	* 94ed168200448       busybox@sha256:bda689514be526d9557ad442312e5d541757c453c50b8cf2ae68597c291385a1   About a minute ago   Exited              busybox                     0                   0d08e9bc45fd2
	* b3aa2f145af59       bad58561c4be7                                                                     About a minute ago   Exited              storage-provisioner         1                   64d8642215328
	* 903dda65df79c       f59dcacceff45                                                                     2 minutes ago        Exited              coredns                     0                   a969b6abbde31
	* 48f5e104f87d5       8fa56d18961fa                                                                     2 minutes ago        Exited              kube-proxy                  0                   e7fb33fbd7cfb
	* 2e4cf189ac411       d82530ead066d                                                                     2 minutes ago        Exited              kube-controller-manager     1                   e7651a8560325
	* 14a3846ed127e       9508b7d8008de                                                                     3 minutes ago        Exited              kube-scheduler              0                   2eae0dc9b703d
	* 41e5fb2679168       3cab8e1b9802c                                                                     3 minutes ago        Exited              etcd                        0                   70fd368585e37
	* 7bbad7d127c23       f1ff9b7e3d6e9                                                                     3 minutes ago        Exited              kube-apiserver              0                   f72f54407d686
	* 
	* ==> coredns [3c102ac79db1] <==
	* .:53
	* 2020-11-09T21:49:18.064Z [INFO] CoreDNS-1.2.6
	* 2020-11-09T21:49:18.064Z [INFO] linux/amd64, go1.11.2, 756749c
	* CoreDNS-1.2.6
	* linux/amd64, go1.11.2, 756749c
	*  [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
	* 
	* ==> coredns [903dda65df79] <==
	* .:53
	* 2020-11-09T21:47:05.855Z [INFO] CoreDNS-1.2.6
	* 2020-11-09T21:47:05.855Z [INFO] linux/amd64, go1.11.2, 756749c
	* CoreDNS-1.2.6
	* linux/amd64, go1.11.2, 756749c
	*  [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
	* E1109 21:47:30.856125       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:313: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* E1109 21:47:30.856140       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:318: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* E1109 21:47:30.856232       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:311: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* E1109 21:48:12.094910       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=11, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.095262       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=11, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.095473       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=11, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.095739       1 reflector.go:322] github.com/coredns/coredns/plugin/kubernetes/controller.go:313: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=436&timeoutSeconds=319&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	* E1109 21:48:12.095791       1 reflector.go:322] github.com/coredns/coredns/plugin/kubernetes/controller.go:318: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=57&timeoutSeconds=506&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	* E1109 21:48:12.095840       1 reflector.go:322] github.com/coredns/coredns/plugin/kubernetes/controller.go:311: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=222&timeoutSeconds=427&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	* [INFO] SIGTERM: Shutting down servers then terminating
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20201109134552-342799
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/hostname=old-k8s-version-20201109134552-342799
	*                     minikube.k8s.io/commit=21ac2a6a37964be4739a8be2fb5a50a8d224597d
	*                     minikube.k8s.io/name=old-k8s-version-20201109134552-342799
	*                     minikube.k8s.io/updated_at=2020_11_09T13_46_51_0700
	*                     minikube.k8s.io/version=v1.14.2
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Mon, 09 Nov 2020 21:46:35 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Mon, 09 Nov 2020 21:49:35 +0000   Mon, 09 Nov 2020 21:46:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Mon, 09 Nov 2020 21:49:35 +0000   Mon, 09 Nov 2020 21:46:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Mon, 09 Nov 2020 21:49:35 +0000   Mon, 09 Nov 2020 21:46:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Mon, 09 Nov 2020 21:49:35 +0000   Mon, 09 Nov 2020 21:46:22 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.59.16
	*   Hostname:    old-k8s-version-20201109134552-342799
	* Capacity:
	*  cpu:                8
	*  ephemeral-storage:  515928484Ki
	*  hugepages-1Gi:      0
	*  hugepages-2Mi:      0
	*  memory:             30887000Ki
	*  pods:               110
	* Allocatable:
	*  cpu:                8
	*  ephemeral-storage:  515928484Ki
	*  hugepages-1Gi:      0
	*  hugepages-2Mi:      0
	*  memory:             30887000Ki
	*  pods:               110
	* System Info:
	*  Machine ID:                 2a99baa5ff9243fda609f5b1d8a9cb24
	*  System UUID:                8b0e325c-05be-4325-a5e0-2af926fb115b
	*  Boot ID:                    9ad1ab50-5be9-48e2-8ae1-dc31113bc120
	*  Kernel Version:             4.9.0-14-amd64
	*  OS Image:                   Ubuntu 20.04.1 LTS
	*  Operating System:           linux
	*  Architecture:               amd64
	*  Container Runtime Version:  docker://19.3.13
	*  Kubelet Version:            v1.13.0
	*  Kube-Proxy Version:         v1.13.0
	* Non-terminated Pods:         (10 in total)
	*   Namespace                  Name                                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                  ----                                                             ------------  ----------  ---------------  -------------  ---
	*   default                    busybox                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	*   kube-system                coredns-86c58d9df4-dgjln                                         100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m37s
	*   kube-system                etcd-old-k8s-version-20201109134552-342799                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	*   kube-system                kube-apiserver-old-k8s-version-20201109134552-342799             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	*   kube-system                kube-controller-manager-old-k8s-version-20201109134552-342799    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m47s
	*   kube-system                kube-proxy-gmlqw                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	*   kube-system                kube-scheduler-old-k8s-version-20201109134552-342799             100m (1%)     0 (0%)      0 (0%)           0 (0%)         104s
	*   kube-system                storage-provisioner                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	*   kubernetes-dashboard       dashboard-metrics-scraper-7fc7ffbd75-kkdqv                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	*   kubernetes-dashboard       kubernetes-dashboard-66766c77dc-f24f5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests   Limits
	*   --------           --------   ------
	*   cpu                650m (8%)  0 (0%)
	*   memory             70Mi (0%)  170Mi (0%)
	*   ephemeral-storage  0 (0%)     0 (0%)
	* Events:
	*   Type    Reason                   Age                    From                                               Message
	*   ----    ------                   ----                   ----                                               -------
	*   Normal  NodeHasSufficientMemory  3m15s (x8 over 3m16s)  kubelet, old-k8s-version-20201109134552-342799     Node old-k8s-version-20201109134552-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    3m15s (x7 over 3m16s)  kubelet, old-k8s-version-20201109134552-342799     Node old-k8s-version-20201109134552-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     3m15s (x8 over 3m16s)  kubelet, old-k8s-version-20201109134552-342799     Node old-k8s-version-20201109134552-342799 status is now: NodeHasSufficientPID
	*   Normal  Starting                 2m34s                  kube-proxy, old-k8s-version-20201109134552-342799  Starting kube-proxy.
	*   Normal  Starting                 51s                    kubelet, old-k8s-version-20201109134552-342799     Starting kubelet.
	*   Normal  NodeAllocatableEnforced  51s                    kubelet, old-k8s-version-20201109134552-342799     Updated Node Allocatable limit across pods
	*   Normal  NodeHasSufficientPID     48s (x7 over 51s)      kubelet, old-k8s-version-20201109134552-342799     Node old-k8s-version-20201109134552-342799 status is now: NodeHasSufficientPID
	*   Normal  NodeHasSufficientMemory  46s (x8 over 51s)      kubelet, old-k8s-version-20201109134552-342799     Node old-k8s-version-20201109134552-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    46s (x8 over 51s)      kubelet, old-k8s-version-20201109134552-342799     Node old-k8s-version-20201109134552-342799 status is now: NodeHasNoDiskPressure
	*   Normal  Starting                 22s                    kube-proxy, old-k8s-version-20201109134552-342799  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [  +0.110709] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +27.722706] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:40] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:41] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +4.475323] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:42] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +28.080349] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:44] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +34.380797] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +1.362576] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:45] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +8.618364] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:46] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +10.217485] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:48] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +7.704784] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	* [  +0.000035] ll header: 00000000: ff ff ff ff ff ff 0e ae 30 cf cc 1c 08 06        ........0.....
	* [  +0.000006] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	* [  +0.000001] ll header: 00000000: ff ff ff ff ff ff 0e ae 30 cf cc 1c 08 06        ........0.....
	* [  +6.373099] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +9.034581] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth9dbc4a3c
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 42 95 d6 6a 72 32 08 06        ......B..jr2..
	* [Nov 9 21:49] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth81f8fb18
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a a8 9e 9e b5 6a 08 06        ......:....j..
	* [ +11.663987] cgroup: cgroup2: unknown option "nsdelegate"
	* 
	* ==> etcd [41e5fb267916] <==
	* 2020-11-09 21:47:39.408107 W | etcdserver: read-only range request "key:\"/registry/secrets/kube-system/storage-provisioner-token-fs6q7\" " with result "range_response_count:1 size:2451" took too long (1.391354725s) to execute
	* 2020-11-09 21:47:39.408412 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/coredns\" " with result "range_response_count:1 size:217" took too long (1.392220711s) to execute
	* 2020-11-09 21:47:39.408636 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices/v1.\" " with result "range_response_count:1 size:579" took too long (1.393558386s) to execute
	* 2020-11-09 21:47:39.409468 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.certificates.k8s.io\" " with result "range_response_count:1 size:662" took too long (1.394392291s) to execute
	* 2020-11-09 21:47:39.409823 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.storage.k8s.io\" " with result "range_response_count:1 size:647" took too long (1.394819448s) to execute
	* 2020-11-09 21:47:39.410036 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.apiextensions.k8s.io\" " with result "range_response_count:1 size:665" took too long (1.395071963s) to execute
	* 2020-11-09 21:47:39.410192 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/cronjob-controller\" " with result "range_response_count:1 size:251" took too long (1.395490809s) to execute
	* 2020-11-09 21:47:39.410247 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" " with result "range_response_count:1 size:272" took too long (1.395417143s) to execute
	* 2020-11-09 21:47:39.410356 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/coredns\" " with result "range_response_count:1 size:217" took too long (1.395527521s) to execute
	* 2020-11-09 21:47:39.410427 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/coredns\" " with result "range_response_count:1 size:217" took too long (1.395696613s) to execute
	* 2020-11-09 21:47:39.410515 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices/v1.apps\" " with result "range_response_count:1 size:603" took too long (1.395567066s) to execute
	* 2020-11-09 21:47:49.558815 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-20201109134552-342799\" " with result "range_response_count:1 size:3096" took too long (239.801958ms) to execute
	* 2020-11-09 21:47:54.779990 W | etcdserver: read-only range request "key:\"/registry/replicasets\" range_end:\"/registry/replicasett\" count_only:true " with result "range_response_count:0 size:7" took too long (274.412544ms) to execute
	* 2020-11-09 21:48:01.416318 W | wal: sync duration of 2.10045279s, expected less than 1s
	* 2020-11-09 21:48:01.430860 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:767" took too long (2.005294739s) to execute
	* 2020-11-09 21:48:01.441664 W | etcdserver: read-only range request "key:\"/registry/secrets/kube-system/pod-garbage-collector-token-kxs7r\" " with result "range_response_count:1 size:2465" took too long (1.977680015s) to execute
	* 2020-11-09 21:48:01.441752 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:170" took too long (1.868179144s) to execute
	* 2020-11-09 21:48:01.442508 W | etcdserver: read-only range request "key:\"/registry/secrets/kube-system/cronjob-controller-token-lds96\" " with result "range_response_count:1 size:2444" took too long (1.867677045s) to execute
	* 2020-11-09 21:48:12.269849 N | pkg/osutil: received terminated signal, shutting down...
	* WARNING: 2020/11/09 21:48:12 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: getsockopt: connection refused"; Reconnecting to {127.0.0.1:2379 0  <nil>}
	* WARNING: 2020/11/09 21:48:12 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 192.168.59.16:2379: getsockopt: connection refused"; Reconnecting to {192.168.59.16:2379 0  <nil>}
	* 2020-11-09 21:48:13.270391 I | etcdserver: skipped leadership transfer for single member cluster
	* WARNING: 2020/11/09 21:48:13 grpc: addrConn.transportMonitor exits due to: context canceled
	* WARNING: 2020/11/09 21:48:13 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: operation was canceled"; Reconnecting to {127.0.0.1:2379 0  <nil>}
	* WARNING: 2020/11/09 21:48:13 grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing
	* 
	* ==> etcd [4c6596774f91] <==
	* 2020-11-09 21:49:00.986368 I | etcdserver: data dir = /var/lib/minikube/etcd
	* 2020-11-09 21:49:00.986374 I | etcdserver: member dir = /var/lib/minikube/etcd/member
	* 2020-11-09 21:49:00.986379 I | etcdserver: heartbeat = 100ms
	* 2020-11-09 21:49:00.986384 I | etcdserver: election = 1000ms
	* 2020-11-09 21:49:00.986388 I | etcdserver: snapshot count = 10000
	* 2020-11-09 21:49:00.986406 I | etcdserver: advertise client URLs = https://192.168.59.16:2379
	* 2020-11-09 21:49:01.059521 I | etcdserver: restarting member 47984c33979a6f91 in cluster 79741b01b410835d at commit index 473
	* 2020-11-09 21:49:01.060749 I | raft: 47984c33979a6f91 became follower at term 2
	* 2020-11-09 21:49:01.060806 I | raft: newRaft 47984c33979a6f91 [peers: [], term: 2, commit: 473, applied: 0, lastindex: 473, lastterm: 2]
	* 2020-11-09 21:49:01.159975 W | auth: simple token is not cryptographically signed
	* 2020-11-09 21:49:01.172322 I | etcdserver: starting server... [version: 3.2.24, cluster version: to_be_decided]
	* 2020-11-09 21:49:01.173282 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true
	* 2020-11-09 21:49:01.192874 I | etcdserver/membership: added member 47984c33979a6f91 [https://192.168.59.16:2380] to cluster 79741b01b410835d
	* 2020-11-09 21:49:01.193084 N | etcdserver/membership: set the initial cluster version to 3.2
	* 2020-11-09 21:49:01.193828 I | etcdserver/api: enabled capabilities for version 3.2
	* 2020-11-09 21:49:02.861410 I | raft: 47984c33979a6f91 is starting a new election at term 2
	* 2020-11-09 21:49:02.861522 I | raft: 47984c33979a6f91 became candidate at term 3
	* 2020-11-09 21:49:02.861545 I | raft: 47984c33979a6f91 received MsgVoteResp from 47984c33979a6f91 at term 3
	* 2020-11-09 21:49:02.861560 I | raft: 47984c33979a6f91 became leader at term 3
	* 2020-11-09 21:49:02.861567 I | raft: raft.node: 47984c33979a6f91 elected leader 47984c33979a6f91 at term 3
	* 2020-11-09 21:49:02.861996 I | etcdserver: published {Name:old-k8s-version-20201109134552-342799 ClientURLs:[https://192.168.59.16:2379]} to cluster 79741b01b410835d
	* 2020-11-09 21:49:02.862048 I | embed: ready to serve client requests
	* 2020-11-09 21:49:02.862139 I | embed: ready to serve client requests
	* 2020-11-09 21:49:02.862414 I | embed: serving client requests on 127.0.0.1:2379
	* 2020-11-09 21:49:02.862463 I | embed: serving client requests on 192.168.59.16:2379
	* 
	* ==> kernel <==
	*  21:49:36 up  1:32,  0 users,  load average: 6.54, 9.64, 8.20
	* Linux old-k8s-version-20201109134552-342799 4.9.0-14-amd64 #1 SMP Debian 4.9.240-2 (2020-10-30) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [7bbad7d127c2] <==
	* E1109 21:47:39.450330       1 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
	* I1109 21:48:01.431867       1 trace.go:76] Trace[1321433626]: "Get /api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath" (started: 2020-11-09 21:47:59.424854137 +0000 UTC m=+98.336861532) (total time: 2.006968771s):
	* Trace[1321433626]: [2.006841504s] [2.006837227s] About to write a response
	* I1109 21:48:01.434593       1 trace.go:76] Trace[1445030283]: "GuaranteedUpdate etcd3: *core.Node" (started: 2020-11-09 21:47:59.608062156 +0000 UTC m=+98.520069562) (total time: 1.82649011s):
	* Trace[1445030283]: [1.826413605s] [1.825380486s] Transaction committed
	* I1109 21:48:01.434740       1 trace.go:76] Trace[356942102]: "Patch /api/v1/nodes/old-k8s-version-20201109134552-342799/status" (started: 2020-11-09 21:47:59.6079159 +0000 UTC m=+98.519923302) (total time: 1.826804652s):
	* Trace[356942102]: [1.826714919s] [1.825914808s] Object stored in database
	* I1109 21:48:01.442829       1 trace.go:76] Trace[1452097955]: "Get /api/v1/namespaces/default" (started: 2020-11-09 21:47:59.572808654 +0000 UTC m=+98.484816049) (total time: 1.869976221s):
	* Trace[1452097955]: [1.869907179s] [1.869902961s] About to write a response
	* I1109 21:48:01.444153       1 trace.go:76] Trace[1325082032]: "Get /api/v1/namespaces/kube-system/secrets/pod-garbage-collector-token-kxs7r" (started: 2020-11-09 21:47:59.463397511 +0000 UTC m=+98.375404918) (total time: 1.980723642s):
	* Trace[1325082032]: [1.980663103s] [1.980657375s] About to write a response
	* I1109 21:48:01.444422       1 trace.go:76] Trace[999934736]: "Get /api/v1/namespaces/kube-system/secrets/cronjob-controller-token-lds96" (started: 2020-11-09 21:47:59.573623932 +0000 UTC m=+98.485631340) (total time: 1.870777967s):
	* Trace[999934736]: [1.870739068s] [1.870735562s] About to write a response
	* I1109 21:48:12.090966       1 controller.go:170] Shutting down kubernetes service endpoint reconciler
	* I1109 21:48:12.091696       1 autoregister_controller.go:160] Shutting down autoregister controller
	* I1109 21:48:12.091861       1 apiservice_controller.go:102] Shutting down APIServiceRegistrationController
	* I1109 21:48:12.091895       1 naming_controller.go:295] Shutting down NamingConditionController
	* I1109 21:48:12.091912       1 customresource_discovery_controller.go:214] Shutting down DiscoveryController
	* I1109 21:48:12.091929       1 available_controller.go:295] Shutting down AvailableConditionController
	* I1109 21:48:12.091944       1 establishing_controller.go:84] Shutting down EstablishingController
	* I1109 21:48:12.091965       1 crd_finalizer.go:254] Shutting down CRDFinalizer
	* I1109 21:48:12.092197       1 crdregistration_controller.go:143] Shutting down crd-autoregister controller
	* I1109 21:48:12.092587       1 controller.go:90] Shutting down OpenAPI AggregationController
	* I1109 21:48:12.095774       1 secure_serving.go:156] Stopped listening on [::]:8443
	* E1109 21:48:12.099665       1 controller.go:172] Get https://localhost:8443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:8443: connect: connection refused
	* 
	* ==> kube-apiserver [ea9697518d25] <==
	* I1109 21:49:10.408130       1 establishing_controller.go:73] Starting EstablishingController
	* I1109 21:49:10.557801       1 controller_utils.go:1034] Caches are synced for crd-autoregister controller
	* I1109 21:49:10.558403       1 cache.go:39] Caches are synced for AvailableConditionController controller
	* I1109 21:49:10.558658       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	* I1109 21:49:10.559240       1 cache.go:39] Caches are synced for autoregister controller
	* I1109 21:49:11.462315       1 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
	* I1109 21:49:14.579929       1 trace.go:76] Trace[1731639518]: "Create /api/v1/nodes" (started: 2020-11-09 21:49:10.560896859 +0000 UTC m=+9.640599358) (total time: 4.018928932s):
	* Trace[1731639518]: [4.010599196s] [4.001077536s] About to store object in database
	* I1109 21:49:14.580846       1 trace.go:76] Trace[135405931]: "Create /api/v1/namespaces/default/events" (started: 2020-11-09 21:49:10.460016707 +0000 UTC m=+9.539719178) (total time: 4.120757806s):
	* Trace[135405931]: [4.112259385s] [4.10942997s] About to store object in database
	* I1109 21:49:16.605238       1 trace.go:76] Trace[1046742352]: "Create /apis/authentication.k8s.io/v1/tokenreviews" (started: 2020-11-09 21:49:12.597932741 +0000 UTC m=+11.677635207) (total time: 4.007264932s):
	* Trace[1046742352]: [4.00151278s] [4.001278621s] About to store object in database
	* I1109 21:49:16.927165       1 trace.go:76] Trace[1395323660]: "Create /apis/rbac.authorization.k8s.io/v1/clusterroles" (started: 2020-11-09 21:49:12.922141616 +0000 UTC m=+12.001844101) (total time: 4.00496727s):
	* Trace[1395323660]: [4.001994572s] [4.001028301s] About to store object in database
	* I1109 21:49:16.943646       1 controller.go:608] quota admission added evaluator for: serviceaccounts
	* I1109 21:49:16.969347       1 controller.go:608] quota admission added evaluator for: deployments.apps
	* I1109 21:49:17.022152       1 controller.go:608] quota admission added evaluator for: daemonsets.apps
	* I1109 21:49:17.037697       1 controller.go:608] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	* I1109 21:49:17.044720       1 controller.go:608] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	* I1109 21:49:17.214852       1 trace.go:76] Trace[2072403582]: "Create /api/v1/namespaces/default/events" (started: 2020-11-09 21:49:13.206353131 +0000 UTC m=+12.286055746) (total time: 4.008463842s):
	* Trace[2072403582]: [4.005072234s] [4.000992853s] About to store object in database
	* I1109 21:49:18.585691       1 trace.go:76] Trace[2102421044]: "Create /api/v1/namespaces/default/events" (started: 2020-11-09 21:49:14.582334996 +0000 UTC m=+13.662037470) (total time: 4.00331638s):
	* Trace[2102421044]: [4.001028525s] [4.00095562s] About to store object in database
	* I1109 21:49:20.094833       1 controller.go:608] quota admission added evaluator for: endpoints
	* I1109 21:49:20.166732       1 controller.go:608] quota admission added evaluator for: replicasets.apps
	* 
	* ==> kube-controller-manager [2e4cf189ac41] <==
	* E1109 21:48:12.101022       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.RoleBinding: Get https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/rolebindings?resourceVersion=360&timeout=6m22s&timeoutSeconds=382&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101042       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?resourceVersion=382&timeout=9m23s&timeoutSeconds=563&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101062       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1beta1.CronJob: Get https://control-plane.minikube.internal:8443/apis/batch/v1beta1/cronjobs?resourceVersion=1&timeout=9m30s&timeoutSeconds=570&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101085       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ClusterRoleBinding: Get https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?resourceVersion=355&timeout=5m0s&timeoutSeconds=300&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101103       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Deployment: Get https://control-plane.minikube.internal:8443/apis/apps/v1/deployments?resourceVersion=384&timeout=8m32s&timeoutSeconds=512&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101122       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?resourceVersion=1&timeout=9m40s&timeoutSeconds=580&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101130       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?resourceVersion=1&timeout=7m21s&timeoutSeconds=441&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101146       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.PodTemplate: Get https://control-plane.minikube.internal:8443/api/v1/podtemplates?resourceVersion=1&timeout=6m41s&timeoutSeconds=401&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101155       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?resourceVersion=437&timeout=8m0s&timeoutSeconds=480&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101171       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.VolumeAttachment: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/volumeattachments?resourceVersion=1&timeout=7m41s&timeoutSeconds=461&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101184       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.LimitRange: Get https://control-plane.minikube.internal:8443/api/v1/limitranges?resourceVersion=1&timeout=5m40s&timeoutSeconds=340&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101194       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1beta1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/extensions/v1beta1/replicasets?resourceVersion=382&timeout=7m1s&timeoutSeconds=421&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101217       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1beta1.PodSecurityPolicy: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/podsecuritypolicies?resourceVersion=1&timeout=7m48s&timeoutSeconds=468&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101216       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?resourceVersion=1&timeout=9m18s&timeoutSeconds=558&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101239       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1beta1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1beta1/statefulsets?resourceVersion=1&timeout=9m38s&timeoutSeconds=578&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101243       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1beta1.PodSecurityPolicy: Get https://control-plane.minikube.internal:8443/apis/extensions/v1beta1/podsecuritypolicies?resourceVersion=1&timeout=7m52s&timeoutSeconds=472&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101269       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1beta1.PriorityClass: Get https://control-plane.minikube.internal:8443/apis/scheduling.k8s.io/v1beta1/priorityclasses?resourceVersion=22&timeout=8m50s&timeoutSeconds=530&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101475       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/configmaps?resourceVersion=312&timeout=8m5s&timeoutSeconds=485&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101489       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ServiceAccount: Get https://control-plane.minikube.internal:8443/api/v1/serviceaccounts?resourceVersion=356&timeout=8m56s&timeoutSeconds=536&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101550       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1beta1.Ingress: Get https://control-plane.minikube.internal:8443/apis/extensions/v1beta1/ingresses?resourceVersion=1&timeout=5m41s&timeoutSeconds=341&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101547       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?resourceVersion=432&timeout=7m40s&timeoutSeconds=460&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101585       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.NetworkPolicy: Get https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/networkpolicies?resourceVersion=1&timeout=5m7s&timeoutSeconds=307&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101628       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.DaemonSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/daemonsets?resourceVersion=376&timeout=7m9s&timeoutSeconds=429&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101871       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Namespace: Get https://control-plane.minikube.internal:8443/api/v1/namespaces?resourceVersion=57&timeout=8m32s&timeoutSeconds=512&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101929       1 reflector.go:251] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to watch <nil>: Get https://control-plane.minikube.internal:8443/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions?resourceVersion=1&timeoutSeconds=533&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* 
	* ==> kube-controller-manager [c3f4383e623f] <==
	* I1109 21:49:20.164722       1 controller_utils.go:1034] Caches are synced for deployment controller
	* I1109 21:49:20.165210       1 controller_utils.go:1034] Caches are synced for expand controller
	* I1109 21:49:20.166154       1 controller_utils.go:1034] Caches are synced for PV protection controller
	* I1109 21:49:20.167326       1 controller_utils.go:1034] Caches are synced for persistent volume controller
	* I1109 21:49:20.169210       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"6aec4483-22d5-11eb-bfda-02423a01d930", APIVersion:"apps/v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-7fc7ffbd75 to 1
	* I1109 21:49:20.170284       1 controller_utils.go:1034] Caches are synced for ReplicaSet controller
	* I1109 21:49:20.170476       1 controller_utils.go:1034] Caches are synced for daemon sets controller
	* I1109 21:49:20.174437       1 controller_utils.go:1034] Caches are synced for namespace controller
	* I1109 21:49:20.175423       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"6aeddf27-22d5-11eb-bfda-02423a01d930", APIVersion:"apps/v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-66766c77dc to 1
	* I1109 21:49:20.186692       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-7fc7ffbd75", UID:"6c15958d-22d5-11eb-bfda-02423a01d930", APIVersion:"apps/v1", ResourceVersion:"498", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-7fc7ffbd75-kkdqv
	* I1109 21:49:20.201544       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-66766c77dc", UID:"6c15babe-22d5-11eb-bfda-02423a01d930", APIVersion:"apps/v1", ResourceVersion:"500", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-66766c77dc-f24f5
	* I1109 21:49:20.251097       1 controller_utils.go:1034] Caches are synced for HPA controller
	* I1109 21:49:20.366661       1 controller_utils.go:1034] Caches are synced for taint controller
	* I1109 21:49:20.366776       1 taint_manager.go:198] Starting NoExecuteTaintManager
	* I1109 21:49:20.366835       1 node_lifecycle_controller.go:1222] Initializing eviction metric for zone: 
	* W1109 21:49:20.366903       1 node_lifecycle_controller.go:895] Missing timestamp for Node old-k8s-version-20201109134552-342799. Assuming now as a timestamp.
	* I1109 21:49:20.366939       1 node_lifecycle_controller.go:1122] Controller detected that zone  is now in state Normal.
	* I1109 21:49:20.367157       1 event.go:221] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-20201109134552-342799", UID:"0a391932-22d5-11eb-bca7-0242456c8bb3", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node old-k8s-version-20201109134552-342799 event: Registered Node old-k8s-version-20201109134552-342799 in Controller
	* I1109 21:49:20.381755       1 controller_utils.go:1034] Caches are synced for resource quota controller
	* I1109 21:49:20.466461       1 controller_utils.go:1034] Caches are synced for ReplicationController controller
	* I1109 21:49:20.466809       1 controller_utils.go:1034] Caches are synced for disruption controller
	* I1109 21:49:20.466832       1 disruption.go:296] Sending events to api server.
	* I1109 21:49:20.672347       1 controller_utils.go:1034] Caches are synced for garbage collector controller
	* I1109 21:49:20.714396       1 controller_utils.go:1034] Caches are synced for garbage collector controller
	* I1109 21:49:20.714425       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* 
	* ==> kube-proxy [2b37250db794] <==
	* W1109 21:49:12.468572       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	* I1109 21:49:12.492237       1 server_others.go:148] Using iptables Proxier.
	* W1109 21:49:12.492448       1 proxier.go:319] clusterCIDR not specified, unable to distinguish between internal and external traffic
	* I1109 21:49:12.492531       1 server_others.go:178] Tearing down inactive rules.
	* I1109 21:49:13.194253       1 server.go:464] Version: v1.13.0
	* I1109 21:49:13.204697       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:49:13.204894       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1109 21:49:13.204971       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1109 21:49:13.205233       1 config.go:102] Starting endpoints config controller
	* I1109 21:49:13.205270       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	* I1109 21:49:13.205257       1 config.go:202] Starting service config controller
	* I1109 21:49:13.205291       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	* I1109 21:49:13.305594       1 controller_utils.go:1034] Caches are synced for service config controller
	* I1109 21:49:13.305608       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	* 
	* ==> kube-proxy [48f5e104f87d] <==
	* W1109 21:47:00.403162       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	* I1109 21:47:00.482340       1 server_others.go:148] Using iptables Proxier.
	* W1109 21:47:00.482537       1 proxier.go:319] clusterCIDR not specified, unable to distinguish between internal and external traffic
	* I1109 21:47:00.483878       1 server_others.go:178] Tearing down inactive rules.
	* I1109 21:47:01.122054       1 server.go:464] Version: v1.13.0
	* I1109 21:47:01.131719       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:47:01.131874       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1109 21:47:01.132010       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1109 21:47:01.132714       1 config.go:102] Starting endpoints config controller
	* I1109 21:47:01.132743       1 config.go:202] Starting service config controller
	* I1109 21:47:01.132749       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	* I1109 21:47:01.132771       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	* I1109 21:47:01.233042       1 controller_utils.go:1034] Caches are synced for service config controller
	* I1109 21:47:01.233582       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	* E1109 21:48:12.096135       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=11, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.096538       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=11, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.096840       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Endpoints: Get https://control-plane.minikube.internal:8443/api/v1/endpoints?resourceVersion=436&timeout=7m0s&timeoutSeconds=420&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.097012       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?resourceVersion=222&timeout=7m48s&timeoutSeconds=468&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* 
	* ==> kube-scheduler [14a3846ed127] <==
	* E1109 21:46:47.959033       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:46:47.959036       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:46:47.959109       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* I1109 21:46:49.832766       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	* I1109 21:46:49.933460       1 controller_utils.go:1034] Caches are synced for scheduler controller
	* E1109 21:48:12.096110       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=399, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.096588       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=399, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.096823       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=399, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.097027       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=399, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.097196       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=399, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.097568       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=399, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.097813       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=399, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.098019       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=399, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.098201       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=399, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.098394       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=399, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.098706       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=359&timeout=5m28s&timeoutSeconds=328&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.098720       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?resourceVersion=1&timeout=9m2s&timeoutSeconds=542&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.098737       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?resourceVersion=1&timeout=5m57s&timeoutSeconds=357&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.098787       1 reflector.go:251] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to watch *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=432&timeoutSeconds=571&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.098808       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?resourceVersion=437&timeout=5m43s&timeoutSeconds=343&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.098815       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?resourceVersion=222&timeout=9m23s&timeoutSeconds=563&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.098850       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?resourceVersion=1&timeout=7m51s&timeoutSeconds=471&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.098867       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?resourceVersion=1&timeout=5m29s&timeoutSeconds=329&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.098911       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?resourceVersion=382&timeout=5m14s&timeoutSeconds=314&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.098936       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ReplicationController: Get https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?resourceVersion=1&timeout=5m37s&timeoutSeconds=337&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* 
	* ==> kube-scheduler [edd9fb6c5659] <==
	* I1109 21:49:01.766611       1 serving.go:318] Generated self-signed cert in-memory
	* W1109 21:49:02.765199       1 authentication.go:235] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
	* W1109 21:49:02.765230       1 authentication.go:238] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
	* W1109 21:49:02.765245       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
	* I1109 21:49:02.768481       1 server.go:150] Version: v1.13.0
	* I1109 21:49:02.768834       1 defaults.go:210] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	* W1109 21:49:02.770672       1 authorization.go:47] Authorization is disabled
	* W1109 21:49:02.770698       1 authentication.go:55] Authentication is disabled
	* I1109 21:49:02.770712       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on 127.0.0.1:10251
	* I1109 21:49:02.771332       1 secure_serving.go:116] Serving securely on [::]:10259
	* I1109 21:49:11.473969       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	* I1109 21:49:11.574167       1 controller_utils.go:1034] Caches are synced for scheduler controller
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2020-11-09 21:48:25 UTC, end at Mon 2020-11-09 21:49:37 UTC. --
	* Nov 09 21:49:10 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:10.659475    1076 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-8nr45" (UniqueName: "kubernetes.io/secret/17b88546-22d5-11eb-bca7-0242456c8bb3-kube-proxy-token-8nr45") pod "kube-proxy-gmlqw" (UID: "17b88546-22d5-11eb-bca7-0242456c8bb3")
	* Nov 09 21:49:10 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:10.659639    1076 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-prkr2" (UniqueName: "kubernetes.io/secret/3e538e60-22d5-11eb-bca7-0242456c8bb3-default-token-prkr2") pod "busybox" (UID: "3e538e60-22d5-11eb-bca7-0242456c8bb3")
	* Nov 09 21:49:10 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:10.659699    1076 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-fs6q7" (UniqueName: "kubernetes.io/secret/18d7b974-22d5-11eb-bca7-0242456c8bb3-storage-provisioner-token-fs6q7") pod "storage-provisioner" (UID: "18d7b974-22d5-11eb-bca7-0242456c8bb3")
	* Nov 09 21:49:10 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:10.659733    1076 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/17b89baa-22d5-11eb-bca7-0242456c8bb3-config-volume") pod "coredns-86c58d9df4-dgjln" (UID: "17b89baa-22d5-11eb-bca7-0242456c8bb3")
	* Nov 09 21:49:10 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:10.760882    1076 reconciler.go:154] Reconciler: start to sync state
	* Nov 09 21:49:11 old-k8s-version-20201109134552-342799 kubelet[1076]: E1109 21:49:11.686280    1076 remote_runtime.go:282] ContainerStatus "6271ad6709bbabbc79fd85986babe78231b36960c044baa26875373694b52a1d" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 6271ad6709bbabbc79fd85986babe78231b36960c044baa26875373694b52a1d
	* Nov 09 21:49:11 old-k8s-version-20201109134552-342799 kubelet[1076]: E1109 21:49:11.686369    1076 kuberuntime_container.go:397] ContainerStatus for 6271ad6709bbabbc79fd85986babe78231b36960c044baa26875373694b52a1d error: rpc error: code = Unknown desc = Error: No such container: 6271ad6709bbabbc79fd85986babe78231b36960c044baa26875373694b52a1d
	* Nov 09 21:49:11 old-k8s-version-20201109134552-342799 kubelet[1076]: E1109 21:49:11.686393    1076 kuberuntime_manager.go:871] getPodContainerStatuses for pod "storage-provisioner_kube-system(18d7b974-22d5-11eb-bca7-0242456c8bb3)" failed: rpc error: code = Unknown desc = Error: No such container: 6271ad6709bbabbc79fd85986babe78231b36960c044baa26875373694b52a1d
	* Nov 09 21:49:11 old-k8s-version-20201109134552-342799 kubelet[1076]: E1109 21:49:11.686434    1076 generic.go:247] PLEG: Ignoring events for pod storage-provisioner/kube-system: rpc error: code = Unknown desc = Error: No such container: 6271ad6709bbabbc79fd85986babe78231b36960c044baa26875373694b52a1d
	* Nov 09 21:49:12 old-k8s-version-20201109134552-342799 kubelet[1076]: W1109 21:49:12.461926    1076 pod_container_deletor.go:75] Container "c505f78f24db946b286eb25bc85c72c9018a30dcf4057653b47429bd38ad6d40" not found in pod's containers
	* Nov 09 21:49:12 old-k8s-version-20201109134552-342799 kubelet[1076]: W1109 21:49:12.480319    1076 pod_container_deletor.go:75] Container "2ed15a87982cabaaa6538e15459da05656ac7d5d54f029ac8f03acee2ac45fa7" not found in pod's containers
	* Nov 09 21:49:14 old-k8s-version-20201109134552-342799 kubelet[1076]: E1109 21:49:14.364172    1076 summary_sys_containers.go:45] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Nov 09 21:49:14 old-k8s-version-20201109134552-342799 kubelet[1076]: E1109 21:49:14.364542    1076 helpers.go:735] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Nov 09 21:49:14 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:14.878138    1076 kubelet_node_status.go:114] Node old-k8s-version-20201109134552-342799 was previously registered
	* Nov 09 21:49:14 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:14.878190    1076 kubelet_node_status.go:75] Successfully registered node old-k8s-version-20201109134552-342799
	* Nov 09 21:49:20 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:20.392169    1076 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-95m6p" (UniqueName: "kubernetes.io/secret/6c190efb-22d5-11eb-bfda-02423a01d930-kubernetes-dashboard-token-95m6p") pod "kubernetes-dashboard-66766c77dc-f24f5" (UID: "6c190efb-22d5-11eb-bfda-02423a01d930")
	* Nov 09 21:49:20 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:20.392273    1076 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/6c17ef17-22d5-11eb-bfda-02423a01d930-tmp-volume") pod "dashboard-metrics-scraper-7fc7ffbd75-kkdqv" (UID: "6c17ef17-22d5-11eb-bfda-02423a01d930")
	* Nov 09 21:49:20 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:20.392319    1076 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-95m6p" (UniqueName: "kubernetes.io/secret/6c17ef17-22d5-11eb-bfda-02423a01d930-kubernetes-dashboard-token-95m6p") pod "dashboard-metrics-scraper-7fc7ffbd75-kkdqv" (UID: "6c17ef17-22d5-11eb-bfda-02423a01d930")
	* Nov 09 21:49:20 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:20.392351    1076 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/6c190efb-22d5-11eb-bfda-02423a01d930-tmp-volume") pod "kubernetes-dashboard-66766c77dc-f24f5" (UID: "6c190efb-22d5-11eb-bfda-02423a01d930")
	* Nov 09 21:49:21 old-k8s-version-20201109134552-342799 kubelet[1076]: W1109 21:49:21.158772    1076 pod_container_deletor.go:75] Container "386cd3ccf8164f1e29f55682e3b5b3f5d0a7a9c625255302e6f97ef9c3b43ad9" not found in pod's containers
	* Nov 09 21:49:21 old-k8s-version-20201109134552-342799 kubelet[1076]: W1109 21:49:21.208286    1076 pod_container_deletor.go:75] Container "c1446a446bcab28078a7e963fe9f0d8ad1c71e3d0e8277b1874d2ba98ccf1f58" not found in pod's containers
	* Nov 09 21:49:24 old-k8s-version-20201109134552-342799 kubelet[1076]: E1109 21:49:24.390525    1076 summary_sys_containers.go:45] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Nov 09 21:49:24 old-k8s-version-20201109134552-342799 kubelet[1076]: E1109 21:49:24.390599    1076 helpers.go:735] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Nov 09 21:49:34 old-k8s-version-20201109134552-342799 kubelet[1076]: E1109 21:49:34.414226    1076 summary_sys_containers.go:45] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Nov 09 21:49:34 old-k8s-version-20201109134552-342799 kubelet[1076]: E1109 21:49:34.414274    1076 helpers.go:735] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* 
	* ==> kubernetes-dashboard [c38f07639a34] <==
	* 2020/11/09 21:49:21 Starting overwatch
	* 2020/11/09 21:49:21 Using namespace: kubernetes-dashboard
	* 2020/11/09 21:49:21 Using in-cluster config to connect to apiserver
	* 2020/11/09 21:49:21 Using secret token for csrf signing
	* 2020/11/09 21:49:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	* 2020/11/09 21:49:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	* 2020/11/09 21:49:21 Successful initial request to the apiserver, version: v1.13.0
	* 2020/11/09 21:49:21 Generating JWE encryption key
	* 2020/11/09 21:49:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	* 2020/11/09 21:49:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	* 2020/11/09 21:49:21 Initializing JWE encryption key from synchronized object
	* 2020/11/09 21:49:21 Creating in-cluster Sidecar client
	* 2020/11/09 21:49:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	* 2020/11/09 21:49:21 Serving insecurely on HTTP port: 9090
	* 
	* ==> storage-provisioner [6271ad6709bb] <==
	* 
	* ==> storage-provisioner [b3aa2f145af5] <==
	* I1109 21:47:40.011460       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I1109 21:47:57.414254       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I1109 21:47:57.414415       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20201109134552-342799_a4fa827a-e551-4859-a5e9-cfcfad954e71!
	* I1109 21:47:57.414466       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"18d34421-22d5-11eb-bca7-0242456c8bb3", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20201109134552-342799_a4fa827a-e551-4859-a5e9-cfcfad954e71 became leader
	* I1109 21:47:57.514778       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20201109134552-342799_a4fa827a-e551-4859-a5e9-cfcfad954e71!

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20201109134552-342799 -n old-k8s-version-20201109134552-342799
helpers_test.go:255: (dbg) Run:  kubectl --context old-k8s-version-20201109134552-342799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: non-running pods: 
helpers_test.go:263: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:266: (dbg) Run:  kubectl --context old-k8s-version-20201109134552-342799 describe pod 
helpers_test.go:266: (dbg) Non-zero exit: kubectl --context old-k8s-version-20201109134552-342799 describe pod : exit status 1 (89.655738ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:268: kubectl --context old-k8s-version-20201109134552-342799 describe pod : exit status 1
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:225: (dbg) Run:  docker inspect old-k8s-version-20201109134552-342799
helpers_test.go:229: (dbg) docker inspect old-k8s-version-20201109134552-342799:

-- stdout --
	[
	    {
	        "Id": "cfd70b73dfb6db0516841fa9a91461380a57fa796842d257bef788283c2e545d",
	        "Created": "2020-11-09T21:45:55.430862775Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 540371,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-11-09T21:48:24.860311086Z",
	            "FinishedAt": "2020-11-09T21:48:22.797870491Z"
	        },
	        "Image": "sha256:e0876e0a2db41a04c8143cbd27a7a5f9e10b610cb093093def031c59e5b44b0c",
	        "ResolvConfPath": "/var/lib/docker/containers/cfd70b73dfb6db0516841fa9a91461380a57fa796842d257bef788283c2e545d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cfd70b73dfb6db0516841fa9a91461380a57fa796842d257bef788283c2e545d/hostname",
	        "HostsPath": "/var/lib/docker/containers/cfd70b73dfb6db0516841fa9a91461380a57fa796842d257bef788283c2e545d/hosts",
	        "LogPath": "/var/lib/docker/containers/cfd70b73dfb6db0516841fa9a91461380a57fa796842d257bef788283c2e545d/cfd70b73dfb6db0516841fa9a91461380a57fa796842d257bef788283c2e545d-json.log",
	        "Name": "/old-k8s-version-20201109134552-342799",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20201109134552-342799:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20201109134552-342799",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bc1e54b698a8405d0672b9789553d3b556ee40f2e8de5116a22175e5af119a97-init/diff:/var/lib/docker/overlay2/95e86d765bff52afd170fbce58d14aba80acb592622f18ddf93f06d4a9f68b7b/diff:/var/lib/docker/overlay2/f3b9d7ca87d0bc2c33a2b73d4acfe36b9573ed84758410f9fb2646a520f241fe/diff:/var/lib/docker/overlay2/207fc0f6ee67dd96ccb25c63e93e7189f5f397b509f8c71ead5583a80c5f94b5/diff:/var/lib/docker/overlay2/8caa5ec9f05b14f99c7708700c1734fe9d519d629a5228c0c3cb1231f1cef09e/diff:/var/lib/docker/overlay2/f2f5d9470c647e56b32cee1438db6b6249291738854b6e7a0a4a405034649aa6/diff:/var/lib/docker/overlay2/99b3672a581ed70eb5d25da5aed44455272a7adbd48148c91dc38abfbaa6e93d/diff:/var/lib/docker/overlay2/4bc3f7ac122327f368cdfb9bf82d578fce25cf7f63160aa08d869610da8889f4/diff:/var/lib/docker/overlay2/0f945a2ada7520d0f95289ddf08a8e7aedfd5dbfad8087529175cd81d61b52bc/diff:/var/lib/docker/overlay2/a14122d44a5bdb05416a0e7fbe1684738e86454c2eeb7671ecc856e6fdaa0273/diff:/var/lib/docker/overlay2/e669c01451df27b1aeed4835d4d9d40fae282947f2b260d9bc1d86355f464b80/diff:/var/lib/docker/overlay2/f0227d1f2e3671850f58a9e89d347d8c2a26c40d79b078f02122819452c582d8/diff:/var/lib/docker/overlay2/35b3431b00b5c7bd66b1d24487a8a45645708ce7a87848986068c0df1515b812/diff:/var/lib/docker/overlay2/c1af3457c560ed19e519e64ff919ac7035d8d9801f4f44adf7b160339827ebd0/diff:/var/lib/docker/overlay2/3861f37a52f49a93d483d69cc77384a5792b260325af8224c6225039f7480889/diff:/var/lib/docker/overlay2/05239e7556fc89f82e761ff1ddd2d1ed0ee1b426a039ad3c9a3ba0519d737f51/diff:/var/lib/docker/overlay2/cb1f04e7554841e6f2102080f7854a3c7e464e45648c1419ef2f8b4d8a03f0bb/diff:/var/lib/docker/overlay2/80908997b1122a686bee670e8dd93659dece6b51614d8576f02c11b96178534d/diff:/var/lib/docker/overlay2/6c4d0be909f5f0a95818a6009b44c355b57e39b0709de1cce2ba45597354d590/diff:/var/lib/docker/overlay2/3a855d5f2b162c2d938a5745ba8131c9a2b244967b4a03f6cddd177236f3a934/diff:/var/lib/docker/overlay2/0ab0a2a3e614830a41a389aa6ff55503f9c10b1227acfb927550f9726eb9a605/diff:/var/lib/docker/overlay2/3331dc047d4c1c4d3b6bf48aba65be1ad34cff6d25329ef56e91ad8e4ee65fde/diff:/var/lib/docker/overlay2/d286c2ac5bb00869e09924bb7fc7389073b7267af1cc4c3ae6a339b29e4c0cc1/diff:/var/lib/docker/overlay2/d0a518fbd2bc93756d11040cfe4d7757f96e813dd9ddf918a54a82f7e61cb791/diff:/var/lib/docker/overlay2/2d63359a4d0da5faa54d017f487391caaee972fe4fb98db4b82e7aee2d9b01ee/diff:/var/lib/docker/overlay2/40ca142bba947004e2782bce25bd04810589fffbf8086ba91e3a074517cfa13c/diff:/var/lib/docker/overlay2/5b738c1270b81825ef44e9bd466dd8533bc4e3e3796355d7dc5dc104fb3a18c8/diff:/var/lib/docker/overlay2/c57d8a6f6b648bd2cf9d7140762b8590e1a71b9040c8a0f2b8ab31f65011bcd2/diff:/var/lib/docker/overlay2/37f83aba64f92a6cf695468f5b0a1016634c95e09933dcecec297cca22b9038b/diff:/var/lib/docker/overlay2/396d3843ed4e04893bd4bf961eab2522ce90962e626e057f5be38cbe0ed65d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bc1e54b698a8405d0672b9789553d3b556ee40f2e8de5116a22175e5af119a97/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bc1e54b698a8405d0672b9789553d3b556ee40f2e8de5116a22175e5af119a97/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bc1e54b698a8405d0672b9789553d3b556ee40f2e8de5116a22175e5af119a97/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20201109134552-342799",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20201109134552-342799/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20201109134552-342799",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20201109134552-342799",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20201109134552-342799",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "235d84e1fae33e1157c1cda4e2694b9c734de79c2775f5a18735ccf6c987e4c3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/235d84e1fae3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20201109134552-342799": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.59.16"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "cfd70b73dfb6"
	                    ],
	                    "NetworkID": "6a798602d87272d4795c7b386775550e088a3d5682b56de75a4efc87906b3f55",
	                    "EndpointID": "c078af9e3359035362d7ce4a7ac9e2f7db595874c1d5525280ef131798ff18d7",
	                    "Gateway": "192.168.59.1",
	                    "IPAddress": "192.168.59.16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3b:10",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20201109134552-342799 -n old-k8s-version-20201109134552-342799
helpers_test.go:238: <<< TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20201109134552-342799 logs -n 25
helpers_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20201109134552-342799 logs -n 25: (3.295519211s)
helpers_test.go:246: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Mon 2020-11-09 21:48:25 UTC, end at Mon 2020-11-09 21:49:39 UTC. --
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 systemd[1]: docker.service: Succeeded.
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 systemd[1]: Stopped Docker Application Container Engine.
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 systemd[1]: Starting Docker Application Container Engine...
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:36.803323979Z" level=info msg="Starting up"
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:36.806203925Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:36.806242325Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:36.806274753Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:36.806288685Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:36.808144170Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:36.808181062Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:36.808297650Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:36.808326459Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Nov 09 21:48:36 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:36.842328484Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Nov 09 21:48:37 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:37.851319750Z" level=warning msg="Your kernel does not support swap memory limit"
	* Nov 09 21:48:37 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:37.851524034Z" level=warning msg="Your kernel does not support cgroup rt period"
	* Nov 09 21:48:37 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:37.851545075Z" level=warning msg="Your kernel does not support cgroup rt runtime"
	* Nov 09 21:48:37 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:37.851887630Z" level=info msg="Loading containers: start."
	* Nov 09 21:48:38 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:38.065436280Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Nov 09 21:48:38 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:38.137374335Z" level=info msg="Loading containers: done."
	* Nov 09 21:48:38 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:38.166576619Z" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 version=19.03.13
	* Nov 09 21:48:38 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:38.166695679Z" level=info msg="Daemon has completed initialization"
	* Nov 09 21:48:38 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:38.189655428Z" level=info msg="API listen on /var/run/docker.sock"
	* Nov 09 21:48:38 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:48:38.189792220Z" level=info msg="API listen on [::]:2376"
	* Nov 09 21:48:38 old-k8s-version-20201109134552-342799 systemd[1]: Started Docker Application Container Engine.
	* Nov 09 21:49:12 old-k8s-version-20201109134552-342799 dockerd[519]: time="2020-11-09T21:49:12.480441165Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                             CREATED              STATE               NAME                        ATTEMPT             POD ID
	* c38f07639a34a       503bc4b7440b9                                                                     18 seconds ago       Running             kubernetes-dashboard        0                   c1446a446bcab
	* 6b8f59db65677       86262685d9abb                                                                     18 seconds ago       Running             dashboard-metrics-scraper   0                   386cd3ccf8164
	* 3c102ac79db19       f59dcacceff45                                                                     27 seconds ago       Running             coredns                     1                   c505f78f24db9
	* 91e52c326bc0a       56cc512116c8f                                                                     27 seconds ago       Running             busybox                     1                   2ed15a87982ca
	* 6271ad6709bba       bad58561c4be7                                                                     28 seconds ago       Running             storage-provisioner         2                   6d49b57691d98
	* 2b37250db7949       8fa56d18961fa                                                                     28 seconds ago       Running             kube-proxy                  1                   8e5c3490bba84
	* 4c6596774f916       3cab8e1b9802c                                                                     39 seconds ago       Running             etcd                        1                   7e610759eb0bf
	* ea9697518d254       f1ff9b7e3d6e9                                                                     39 seconds ago       Running             kube-apiserver              1                   76445a009ce90
	* c3f4383e623fe       d82530ead066d                                                                     39 seconds ago       Running             kube-controller-manager     2                   3130f7c56ccb8
	* edd9fb6c5659e       9508b7d8008de                                                                     39 seconds ago       Running             kube-scheduler              1                   5c12e73e3f60a
	* 94ed168200448       busybox@sha256:bda689514be526d9557ad442312e5d541757c453c50b8cf2ae68597c291385a1   About a minute ago   Exited              busybox                     0                   0d08e9bc45fd2
	* b3aa2f145af59       bad58561c4be7                                                                     2 minutes ago        Exited              storage-provisioner         1                   64d8642215328
	* 903dda65df79c       f59dcacceff45                                                                     2 minutes ago        Exited              coredns                     0                   a969b6abbde31
	* 48f5e104f87d5       8fa56d18961fa                                                                     2 minutes ago        Exited              kube-proxy                  0                   e7fb33fbd7cfb
	* 2e4cf189ac411       d82530ead066d                                                                     2 minutes ago        Exited              kube-controller-manager     1                   e7651a8560325
	* 14a3846ed127e       9508b7d8008de                                                                     3 minutes ago        Exited              kube-scheduler              0                   2eae0dc9b703d
	* 41e5fb2679168       3cab8e1b9802c                                                                     3 minutes ago        Exited              etcd                        0                   70fd368585e37
	* 7bbad7d127c23       f1ff9b7e3d6e9                                                                     3 minutes ago        Exited              kube-apiserver              0                   f72f54407d686
	* 
	* ==> coredns [3c102ac79db1] <==
	* .:53
	* 2020-11-09T21:49:18.064Z [INFO] CoreDNS-1.2.6
	* 2020-11-09T21:49:18.064Z [INFO] linux/amd64, go1.11.2, 756749c
	* CoreDNS-1.2.6
	* linux/amd64, go1.11.2, 756749c
	*  [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
	* 
	* ==> coredns [903dda65df79] <==
	* .:53
	* 2020-11-09T21:47:05.855Z [INFO] CoreDNS-1.2.6
	* 2020-11-09T21:47:05.855Z [INFO] linux/amd64, go1.11.2, 756749c
	* CoreDNS-1.2.6
	* linux/amd64, go1.11.2, 756749c
	*  [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
	* E1109 21:47:30.856125       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:313: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* E1109 21:47:30.856140       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:318: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* E1109 21:47:30.856232       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:311: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* E1109 21:48:12.094910       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=11, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.095262       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=11, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.095473       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=11, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.095739       1 reflector.go:322] github.com/coredns/coredns/plugin/kubernetes/controller.go:313: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=436&timeoutSeconds=319&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	* E1109 21:48:12.095791       1 reflector.go:322] github.com/coredns/coredns/plugin/kubernetes/controller.go:318: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=57&timeoutSeconds=506&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	* E1109 21:48:12.095840       1 reflector.go:322] github.com/coredns/coredns/plugin/kubernetes/controller.go:311: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=222&timeoutSeconds=427&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	* [INFO] SIGTERM: Shutting down servers then terminating
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20201109134552-342799
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/hostname=old-k8s-version-20201109134552-342799
	*                     minikube.k8s.io/commit=21ac2a6a37964be4739a8be2fb5a50a8d224597d
	*                     minikube.k8s.io/name=old-k8s-version-20201109134552-342799
	*                     minikube.k8s.io/updated_at=2020_11_09T13_46_51_0700
	*                     minikube.k8s.io/version=v1.14.2
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Mon, 09 Nov 2020 21:46:35 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Mon, 09 Nov 2020 21:49:35 +0000   Mon, 09 Nov 2020 21:46:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Mon, 09 Nov 2020 21:49:35 +0000   Mon, 09 Nov 2020 21:46:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Mon, 09 Nov 2020 21:49:35 +0000   Mon, 09 Nov 2020 21:46:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Mon, 09 Nov 2020 21:49:35 +0000   Mon, 09 Nov 2020 21:46:22 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.59.16
	*   Hostname:    old-k8s-version-20201109134552-342799
	* Capacity:
	*  cpu:                8
	*  ephemeral-storage:  515928484Ki
	*  hugepages-1Gi:      0
	*  hugepages-2Mi:      0
	*  memory:             30887000Ki
	*  pods:               110
	* Allocatable:
	*  cpu:                8
	*  ephemeral-storage:  515928484Ki
	*  hugepages-1Gi:      0
	*  hugepages-2Mi:      0
	*  memory:             30887000Ki
	*  pods:               110
	* System Info:
	*  Machine ID:                 2a99baa5ff9243fda609f5b1d8a9cb24
	*  System UUID:                8b0e325c-05be-4325-a5e0-2af926fb115b
	*  Boot ID:                    9ad1ab50-5be9-48e2-8ae1-dc31113bc120
	*  Kernel Version:             4.9.0-14-amd64
	*  OS Image:                   Ubuntu 20.04.1 LTS
	*  Operating System:           linux
	*  Architecture:               amd64
	*  Container Runtime Version:  docker://19.3.13
	*  Kubelet Version:            v1.13.0
	*  Kube-Proxy Version:         v1.13.0
	* Non-terminated Pods:         (10 in total)
	*   Namespace                  Name                                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                  ----                                                             ------------  ----------  ---------------  -------------  ---
	*   default                    busybox                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	*   kube-system                coredns-86c58d9df4-dgjln                                         100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m42s
	*   kube-system                etcd-old-k8s-version-20201109134552-342799                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	*   kube-system                kube-apiserver-old-k8s-version-20201109134552-342799             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	*   kube-system                kube-controller-manager-old-k8s-version-20201109134552-342799    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m52s
	*   kube-system                kube-proxy-gmlqw                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	*   kube-system                kube-scheduler-old-k8s-version-20201109134552-342799             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	*   kube-system                storage-provisioner                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	*   kubernetes-dashboard       dashboard-metrics-scraper-7fc7ffbd75-kkdqv                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	*   kubernetes-dashboard       kubernetes-dashboard-66766c77dc-f24f5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests   Limits
	*   --------           --------   ------
	*   cpu                650m (8%)  0 (0%)
	*   memory             70Mi (0%)  170Mi (0%)
	*   ephemeral-storage  0 (0%)     0 (0%)
	* Events:
	*   Type    Reason                   Age                    From                                               Message
	*   ----    ------                   ----                   ----                                               -------
	*   Normal  NodeHasSufficientMemory  3m20s (x8 over 3m21s)  kubelet, old-k8s-version-20201109134552-342799     Node old-k8s-version-20201109134552-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    3m20s (x7 over 3m21s)  kubelet, old-k8s-version-20201109134552-342799     Node old-k8s-version-20201109134552-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     3m20s (x8 over 3m21s)  kubelet, old-k8s-version-20201109134552-342799     Node old-k8s-version-20201109134552-342799 status is now: NodeHasSufficientPID
	*   Normal  Starting                 2m39s                  kube-proxy, old-k8s-version-20201109134552-342799  Starting kube-proxy.
	*   Normal  Starting                 56s                    kubelet, old-k8s-version-20201109134552-342799     Starting kubelet.
	*   Normal  NodeAllocatableEnforced  56s                    kubelet, old-k8s-version-20201109134552-342799     Updated Node Allocatable limit across pods
	*   Normal  NodeHasSufficientPID     53s (x7 over 56s)      kubelet, old-k8s-version-20201109134552-342799     Node old-k8s-version-20201109134552-342799 status is now: NodeHasSufficientPID
	*   Normal  NodeHasSufficientMemory  51s (x8 over 56s)      kubelet, old-k8s-version-20201109134552-342799     Node old-k8s-version-20201109134552-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    51s (x8 over 56s)      kubelet, old-k8s-version-20201109134552-342799     Node old-k8s-version-20201109134552-342799 status is now: NodeHasNoDiskPressure
	*   Normal  Starting                 27s                    kube-proxy, old-k8s-version-20201109134552-342799  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [  +0.110709] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +27.722706] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:40] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:41] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +4.475323] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:42] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +28.080349] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:44] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +34.380797] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +1.362576] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:45] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +8.618364] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:46] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +10.217485] cgroup: cgroup2: unknown option "nsdelegate"
	* [Nov 9 21:48] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +7.704784] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	* [  +0.000035] ll header: 00000000: ff ff ff ff ff ff 0e ae 30 cf cc 1c 08 06        ........0.....
	* [  +0.000006] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	* [  +0.000001] ll header: 00000000: ff ff ff ff ff ff 0e ae 30 cf cc 1c 08 06        ........0.....
	* [  +6.373099] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +9.034581] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth9dbc4a3c
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 42 95 d6 6a 72 32 08 06        ......B..jr2..
	* [Nov 9 21:49] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth81f8fb18
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a a8 9e 9e b5 6a 08 06        ......:....j..
	* [ +11.663987] cgroup: cgroup2: unknown option "nsdelegate"
	* 
	* ==> etcd [41e5fb267916] <==
	* 2020-11-09 21:47:39.408107 W | etcdserver: read-only range request "key:\"/registry/secrets/kube-system/storage-provisioner-token-fs6q7\" " with result "range_response_count:1 size:2451" took too long (1.391354725s) to execute
	* 2020-11-09 21:47:39.408412 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/coredns\" " with result "range_response_count:1 size:217" took too long (1.392220711s) to execute
	* 2020-11-09 21:47:39.408636 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices/v1.\" " with result "range_response_count:1 size:579" took too long (1.393558386s) to execute
	* 2020-11-09 21:47:39.409468 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.certificates.k8s.io\" " with result "range_response_count:1 size:662" took too long (1.394392291s) to execute
	* 2020-11-09 21:47:39.409823 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.storage.k8s.io\" " with result "range_response_count:1 size:647" took too long (1.394819448s) to execute
	* 2020-11-09 21:47:39.410036 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.apiextensions.k8s.io\" " with result "range_response_count:1 size:665" took too long (1.395071963s) to execute
	* 2020-11-09 21:47:39.410192 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/cronjob-controller\" " with result "range_response_count:1 size:251" took too long (1.395490809s) to execute
	* 2020-11-09 21:47:39.410247 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" " with result "range_response_count:1 size:272" took too long (1.395417143s) to execute
	* 2020-11-09 21:47:39.410356 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/coredns\" " with result "range_response_count:1 size:217" took too long (1.395527521s) to execute
	* 2020-11-09 21:47:39.410427 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/coredns\" " with result "range_response_count:1 size:217" took too long (1.395696613s) to execute
	* 2020-11-09 21:47:39.410515 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices/v1.apps\" " with result "range_response_count:1 size:603" took too long (1.395567066s) to execute
	* 2020-11-09 21:47:49.558815 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-20201109134552-342799\" " with result "range_response_count:1 size:3096" took too long (239.801958ms) to execute
	* 2020-11-09 21:47:54.779990 W | etcdserver: read-only range request "key:\"/registry/replicasets\" range_end:\"/registry/replicasett\" count_only:true " with result "range_response_count:0 size:7" took too long (274.412544ms) to execute
	* 2020-11-09 21:48:01.416318 W | wal: sync duration of 2.10045279s, expected less than 1s
	* 2020-11-09 21:48:01.430860 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:767" took too long (2.005294739s) to execute
	* 2020-11-09 21:48:01.441664 W | etcdserver: read-only range request "key:\"/registry/secrets/kube-system/pod-garbage-collector-token-kxs7r\" " with result "range_response_count:1 size:2465" took too long (1.977680015s) to execute
	* 2020-11-09 21:48:01.441752 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:170" took too long (1.868179144s) to execute
	* 2020-11-09 21:48:01.442508 W | etcdserver: read-only range request "key:\"/registry/secrets/kube-system/cronjob-controller-token-lds96\" " with result "range_response_count:1 size:2444" took too long (1.867677045s) to execute
	* 2020-11-09 21:48:12.269849 N | pkg/osutil: received terminated signal, shutting down...
	* WARNING: 2020/11/09 21:48:12 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: getsockopt: connection refused"; Reconnecting to {127.0.0.1:2379 0  <nil>}
	* WARNING: 2020/11/09 21:48:12 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 192.168.59.16:2379: getsockopt: connection refused"; Reconnecting to {192.168.59.16:2379 0  <nil>}
	* 2020-11-09 21:48:13.270391 I | etcdserver: skipped leadership transfer for single member cluster
	* WARNING: 2020/11/09 21:48:13 grpc: addrConn.transportMonitor exits due to: context canceled
	* WARNING: 2020/11/09 21:48:13 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: operation was canceled"; Reconnecting to {127.0.0.1:2379 0  <nil>}
	* WARNING: 2020/11/09 21:48:13 grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing
	* 
	* ==> etcd [4c6596774f91] <==
	* 2020-11-09 21:49:00.986368 I | etcdserver: data dir = /var/lib/minikube/etcd
	* 2020-11-09 21:49:00.986374 I | etcdserver: member dir = /var/lib/minikube/etcd/member
	* 2020-11-09 21:49:00.986379 I | etcdserver: heartbeat = 100ms
	* 2020-11-09 21:49:00.986384 I | etcdserver: election = 1000ms
	* 2020-11-09 21:49:00.986388 I | etcdserver: snapshot count = 10000
	* 2020-11-09 21:49:00.986406 I | etcdserver: advertise client URLs = https://192.168.59.16:2379
	* 2020-11-09 21:49:01.059521 I | etcdserver: restarting member 47984c33979a6f91 in cluster 79741b01b410835d at commit index 473
	* 2020-11-09 21:49:01.060749 I | raft: 47984c33979a6f91 became follower at term 2
	* 2020-11-09 21:49:01.060806 I | raft: newRaft 47984c33979a6f91 [peers: [], term: 2, commit: 473, applied: 0, lastindex: 473, lastterm: 2]
	* 2020-11-09 21:49:01.159975 W | auth: simple token is not cryptographically signed
	* 2020-11-09 21:49:01.172322 I | etcdserver: starting server... [version: 3.2.24, cluster version: to_be_decided]
	* 2020-11-09 21:49:01.173282 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true
	* 2020-11-09 21:49:01.192874 I | etcdserver/membership: added member 47984c33979a6f91 [https://192.168.59.16:2380] to cluster 79741b01b410835d
	* 2020-11-09 21:49:01.193084 N | etcdserver/membership: set the initial cluster version to 3.2
	* 2020-11-09 21:49:01.193828 I | etcdserver/api: enabled capabilities for version 3.2
	* 2020-11-09 21:49:02.861410 I | raft: 47984c33979a6f91 is starting a new election at term 2
	* 2020-11-09 21:49:02.861522 I | raft: 47984c33979a6f91 became candidate at term 3
	* 2020-11-09 21:49:02.861545 I | raft: 47984c33979a6f91 received MsgVoteResp from 47984c33979a6f91 at term 3
	* 2020-11-09 21:49:02.861560 I | raft: 47984c33979a6f91 became leader at term 3
	* 2020-11-09 21:49:02.861567 I | raft: raft.node: 47984c33979a6f91 elected leader 47984c33979a6f91 at term 3
	* 2020-11-09 21:49:02.861996 I | etcdserver: published {Name:old-k8s-version-20201109134552-342799 ClientURLs:[https://192.168.59.16:2379]} to cluster 79741b01b410835d
	* 2020-11-09 21:49:02.862048 I | embed: ready to serve client requests
	* 2020-11-09 21:49:02.862139 I | embed: ready to serve client requests
	* 2020-11-09 21:49:02.862414 I | embed: serving client requests on 127.0.0.1:2379
	* 2020-11-09 21:49:02.862463 I | embed: serving client requests on 192.168.59.16:2379
	* 
	* ==> kernel <==
	*  21:49:40 up  1:32,  0 users,  load average: 6.41, 9.57, 8.18
	* Linux old-k8s-version-20201109134552-342799 4.9.0-14-amd64 #1 SMP Debian 4.9.240-2 (2020-10-30) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [7bbad7d127c2] <==
	* E1109 21:47:39.450330       1 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
	* I1109 21:48:01.431867       1 trace.go:76] Trace[1321433626]: "Get /api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath" (started: 2020-11-09 21:47:59.424854137 +0000 UTC m=+98.336861532) (total time: 2.006968771s):
	* Trace[1321433626]: [2.006841504s] [2.006837227s] About to write a response
	* I1109 21:48:01.434593       1 trace.go:76] Trace[1445030283]: "GuaranteedUpdate etcd3: *core.Node" (started: 2020-11-09 21:47:59.608062156 +0000 UTC m=+98.520069562) (total time: 1.82649011s):
	* Trace[1445030283]: [1.826413605s] [1.825380486s] Transaction committed
	* I1109 21:48:01.434740       1 trace.go:76] Trace[356942102]: "Patch /api/v1/nodes/old-k8s-version-20201109134552-342799/status" (started: 2020-11-09 21:47:59.6079159 +0000 UTC m=+98.519923302) (total time: 1.826804652s):
	* Trace[356942102]: [1.826714919s] [1.825914808s] Object stored in database
	* I1109 21:48:01.442829       1 trace.go:76] Trace[1452097955]: "Get /api/v1/namespaces/default" (started: 2020-11-09 21:47:59.572808654 +0000 UTC m=+98.484816049) (total time: 1.869976221s):
	* Trace[1452097955]: [1.869907179s] [1.869902961s] About to write a response
	* I1109 21:48:01.444153       1 trace.go:76] Trace[1325082032]: "Get /api/v1/namespaces/kube-system/secrets/pod-garbage-collector-token-kxs7r" (started: 2020-11-09 21:47:59.463397511 +0000 UTC m=+98.375404918) (total time: 1.980723642s):
	* Trace[1325082032]: [1.980663103s] [1.980657375s] About to write a response
	* I1109 21:48:01.444422       1 trace.go:76] Trace[999934736]: "Get /api/v1/namespaces/kube-system/secrets/cronjob-controller-token-lds96" (started: 2020-11-09 21:47:59.573623932 +0000 UTC m=+98.485631340) (total time: 1.870777967s):
	* Trace[999934736]: [1.870739068s] [1.870735562s] About to write a response
	* I1109 21:48:12.090966       1 controller.go:170] Shutting down kubernetes service endpoint reconciler
	* I1109 21:48:12.091696       1 autoregister_controller.go:160] Shutting down autoregister controller
	* I1109 21:48:12.091861       1 apiservice_controller.go:102] Shutting down APIServiceRegistrationController
	* I1109 21:48:12.091895       1 naming_controller.go:295] Shutting down NamingConditionController
	* I1109 21:48:12.091912       1 customresource_discovery_controller.go:214] Shutting down DiscoveryController
	* I1109 21:48:12.091929       1 available_controller.go:295] Shutting down AvailableConditionController
	* I1109 21:48:12.091944       1 establishing_controller.go:84] Shutting down EstablishingController
	* I1109 21:48:12.091965       1 crd_finalizer.go:254] Shutting down CRDFinalizer
	* I1109 21:48:12.092197       1 crdregistration_controller.go:143] Shutting down crd-autoregister controller
	* I1109 21:48:12.092587       1 controller.go:90] Shutting down OpenAPI AggregationController
	* I1109 21:48:12.095774       1 secure_serving.go:156] Stopped listening on [::]:8443
	* E1109 21:48:12.099665       1 controller.go:172] Get https://localhost:8443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:8443: connect: connection refused
	* 
	* ==> kube-apiserver [ea9697518d25] <==
	* I1109 21:49:10.408130       1 establishing_controller.go:73] Starting EstablishingController
	* I1109 21:49:10.557801       1 controller_utils.go:1034] Caches are synced for crd-autoregister controller
	* I1109 21:49:10.558403       1 cache.go:39] Caches are synced for AvailableConditionController controller
	* I1109 21:49:10.558658       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	* I1109 21:49:10.559240       1 cache.go:39] Caches are synced for autoregister controller
	* I1109 21:49:11.462315       1 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
	* I1109 21:49:14.579929       1 trace.go:76] Trace[1731639518]: "Create /api/v1/nodes" (started: 2020-11-09 21:49:10.560896859 +0000 UTC m=+9.640599358) (total time: 4.018928932s):
	* Trace[1731639518]: [4.010599196s] [4.001077536s] About to store object in database
	* I1109 21:49:14.580846       1 trace.go:76] Trace[135405931]: "Create /api/v1/namespaces/default/events" (started: 2020-11-09 21:49:10.460016707 +0000 UTC m=+9.539719178) (total time: 4.120757806s):
	* Trace[135405931]: [4.112259385s] [4.10942997s] About to store object in database
	* I1109 21:49:16.605238       1 trace.go:76] Trace[1046742352]: "Create /apis/authentication.k8s.io/v1/tokenreviews" (started: 2020-11-09 21:49:12.597932741 +0000 UTC m=+11.677635207) (total time: 4.007264932s):
	* Trace[1046742352]: [4.00151278s] [4.001278621s] About to store object in database
	* I1109 21:49:16.927165       1 trace.go:76] Trace[1395323660]: "Create /apis/rbac.authorization.k8s.io/v1/clusterroles" (started: 2020-11-09 21:49:12.922141616 +0000 UTC m=+12.001844101) (total time: 4.00496727s):
	* Trace[1395323660]: [4.001994572s] [4.001028301s] About to store object in database
	* I1109 21:49:16.943646       1 controller.go:608] quota admission added evaluator for: serviceaccounts
	* I1109 21:49:16.969347       1 controller.go:608] quota admission added evaluator for: deployments.apps
	* I1109 21:49:17.022152       1 controller.go:608] quota admission added evaluator for: daemonsets.apps
	* I1109 21:49:17.037697       1 controller.go:608] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	* I1109 21:49:17.044720       1 controller.go:608] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	* I1109 21:49:17.214852       1 trace.go:76] Trace[2072403582]: "Create /api/v1/namespaces/default/events" (started: 2020-11-09 21:49:13.206353131 +0000 UTC m=+12.286055746) (total time: 4.008463842s):
	* Trace[2072403582]: [4.005072234s] [4.000992853s] About to store object in database
	* I1109 21:49:18.585691       1 trace.go:76] Trace[2102421044]: "Create /api/v1/namespaces/default/events" (started: 2020-11-09 21:49:14.582334996 +0000 UTC m=+13.662037470) (total time: 4.00331638s):
	* Trace[2102421044]: [4.001028525s] [4.00095562s] About to store object in database
	* I1109 21:49:20.094833       1 controller.go:608] quota admission added evaluator for: endpoints
	* I1109 21:49:20.166732       1 controller.go:608] quota admission added evaluator for: replicasets.apps
	* 
	* ==> kube-controller-manager [2e4cf189ac41] <==
	* E1109 21:48:12.101022       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.RoleBinding: Get https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/rolebindings?resourceVersion=360&timeout=6m22s&timeoutSeconds=382&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101042       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?resourceVersion=382&timeout=9m23s&timeoutSeconds=563&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101062       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1beta1.CronJob: Get https://control-plane.minikube.internal:8443/apis/batch/v1beta1/cronjobs?resourceVersion=1&timeout=9m30s&timeoutSeconds=570&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101085       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ClusterRoleBinding: Get https://control-plane.minikube.internal:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?resourceVersion=355&timeout=5m0s&timeoutSeconds=300&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101103       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Deployment: Get https://control-plane.minikube.internal:8443/apis/apps/v1/deployments?resourceVersion=384&timeout=8m32s&timeoutSeconds=512&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101122       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?resourceVersion=1&timeout=9m40s&timeoutSeconds=580&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101130       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?resourceVersion=1&timeout=7m21s&timeoutSeconds=441&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101146       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.PodTemplate: Get https://control-plane.minikube.internal:8443/api/v1/podtemplates?resourceVersion=1&timeout=6m41s&timeoutSeconds=401&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101155       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?resourceVersion=437&timeout=8m0s&timeoutSeconds=480&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101171       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.VolumeAttachment: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/volumeattachments?resourceVersion=1&timeout=7m41s&timeoutSeconds=461&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101184       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.LimitRange: Get https://control-plane.minikube.internal:8443/api/v1/limitranges?resourceVersion=1&timeout=5m40s&timeoutSeconds=340&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101194       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1beta1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/extensions/v1beta1/replicasets?resourceVersion=382&timeout=7m1s&timeoutSeconds=421&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101217       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1beta1.PodSecurityPolicy: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/podsecuritypolicies?resourceVersion=1&timeout=7m48s&timeoutSeconds=468&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101216       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?resourceVersion=1&timeout=9m18s&timeoutSeconds=558&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101239       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1beta1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1beta1/statefulsets?resourceVersion=1&timeout=9m38s&timeoutSeconds=578&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101243       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1beta1.PodSecurityPolicy: Get https://control-plane.minikube.internal:8443/apis/extensions/v1beta1/podsecuritypolicies?resourceVersion=1&timeout=7m52s&timeoutSeconds=472&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101269       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1beta1.PriorityClass: Get https://control-plane.minikube.internal:8443/apis/scheduling.k8s.io/v1beta1/priorityclasses?resourceVersion=22&timeout=8m50s&timeoutSeconds=530&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101475       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/configmaps?resourceVersion=312&timeout=8m5s&timeoutSeconds=485&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101489       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ServiceAccount: Get https://control-plane.minikube.internal:8443/api/v1/serviceaccounts?resourceVersion=356&timeout=8m56s&timeoutSeconds=536&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101550       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1beta1.Ingress: Get https://control-plane.minikube.internal:8443/apis/extensions/v1beta1/ingresses?resourceVersion=1&timeout=5m41s&timeoutSeconds=341&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101547       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?resourceVersion=432&timeout=7m40s&timeoutSeconds=460&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101585       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.NetworkPolicy: Get https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/networkpolicies?resourceVersion=1&timeout=5m7s&timeoutSeconds=307&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101628       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.DaemonSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/daemonsets?resourceVersion=376&timeout=7m9s&timeoutSeconds=429&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101871       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Namespace: Get https://control-plane.minikube.internal:8443/api/v1/namespaces?resourceVersion=57&timeout=8m32s&timeoutSeconds=512&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.101929       1 reflector.go:251] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to watch <nil>: Get https://control-plane.minikube.internal:8443/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions?resourceVersion=1&timeoutSeconds=533&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* 
	* ==> kube-controller-manager [c3f4383e623f] <==
	* I1109 21:49:20.164722       1 controller_utils.go:1034] Caches are synced for deployment controller
	* I1109 21:49:20.165210       1 controller_utils.go:1034] Caches are synced for expand controller
	* I1109 21:49:20.166154       1 controller_utils.go:1034] Caches are synced for PV protection controller
	* I1109 21:49:20.167326       1 controller_utils.go:1034] Caches are synced for persistent volume controller
	* I1109 21:49:20.169210       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"6aec4483-22d5-11eb-bfda-02423a01d930", APIVersion:"apps/v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-7fc7ffbd75 to 1
	* I1109 21:49:20.170284       1 controller_utils.go:1034] Caches are synced for ReplicaSet controller
	* I1109 21:49:20.170476       1 controller_utils.go:1034] Caches are synced for daemon sets controller
	* I1109 21:49:20.174437       1 controller_utils.go:1034] Caches are synced for namespace controller
	* I1109 21:49:20.175423       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"6aeddf27-22d5-11eb-bfda-02423a01d930", APIVersion:"apps/v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-66766c77dc to 1
	* I1109 21:49:20.186692       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-7fc7ffbd75", UID:"6c15958d-22d5-11eb-bfda-02423a01d930", APIVersion:"apps/v1", ResourceVersion:"498", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-7fc7ffbd75-kkdqv
	* I1109 21:49:20.201544       1 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-66766c77dc", UID:"6c15babe-22d5-11eb-bfda-02423a01d930", APIVersion:"apps/v1", ResourceVersion:"500", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-66766c77dc-f24f5
	* I1109 21:49:20.251097       1 controller_utils.go:1034] Caches are synced for HPA controller
	* I1109 21:49:20.366661       1 controller_utils.go:1034] Caches are synced for taint controller
	* I1109 21:49:20.366776       1 taint_manager.go:198] Starting NoExecuteTaintManager
	* I1109 21:49:20.366835       1 node_lifecycle_controller.go:1222] Initializing eviction metric for zone: 
	* W1109 21:49:20.366903       1 node_lifecycle_controller.go:895] Missing timestamp for Node old-k8s-version-20201109134552-342799. Assuming now as a timestamp.
	* I1109 21:49:20.366939       1 node_lifecycle_controller.go:1122] Controller detected that zone  is now in state Normal.
	* I1109 21:49:20.367157       1 event.go:221] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-20201109134552-342799", UID:"0a391932-22d5-11eb-bca7-0242456c8bb3", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node old-k8s-version-20201109134552-342799 event: Registered Node old-k8s-version-20201109134552-342799 in Controller
	* I1109 21:49:20.381755       1 controller_utils.go:1034] Caches are synced for resource quota controller
	* I1109 21:49:20.466461       1 controller_utils.go:1034] Caches are synced for ReplicationController controller
	* I1109 21:49:20.466809       1 controller_utils.go:1034] Caches are synced for disruption controller
	* I1109 21:49:20.466832       1 disruption.go:296] Sending events to api server.
	* I1109 21:49:20.672347       1 controller_utils.go:1034] Caches are synced for garbage collector controller
	* I1109 21:49:20.714396       1 controller_utils.go:1034] Caches are synced for garbage collector controller
	* I1109 21:49:20.714425       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* 
	* ==> kube-proxy [2b37250db794] <==
	* W1109 21:49:12.468572       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	* I1109 21:49:12.492237       1 server_others.go:148] Using iptables Proxier.
	* W1109 21:49:12.492448       1 proxier.go:319] clusterCIDR not specified, unable to distinguish between internal and external traffic
	* I1109 21:49:12.492531       1 server_others.go:178] Tearing down inactive rules.
	* I1109 21:49:13.194253       1 server.go:464] Version: v1.13.0
	* I1109 21:49:13.204697       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:49:13.204894       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1109 21:49:13.204971       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1109 21:49:13.205233       1 config.go:102] Starting endpoints config controller
	* I1109 21:49:13.205270       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	* I1109 21:49:13.205257       1 config.go:202] Starting service config controller
	* I1109 21:49:13.205291       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	* I1109 21:49:13.305594       1 controller_utils.go:1034] Caches are synced for service config controller
	* I1109 21:49:13.305608       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	* 
	* ==> kube-proxy [48f5e104f87d] <==
	* W1109 21:47:00.403162       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	* I1109 21:47:00.482340       1 server_others.go:148] Using iptables Proxier.
	* W1109 21:47:00.482537       1 proxier.go:319] clusterCIDR not specified, unable to distinguish between internal and external traffic
	* I1109 21:47:00.483878       1 server_others.go:178] Tearing down inactive rules.
	* I1109 21:47:01.122054       1 server.go:464] Version: v1.13.0
	* I1109 21:47:01.131719       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:47:01.131874       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1109 21:47:01.132010       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1109 21:47:01.132714       1 config.go:102] Starting endpoints config controller
	* I1109 21:47:01.132743       1 config.go:202] Starting service config controller
	* I1109 21:47:01.132749       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	* I1109 21:47:01.132771       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	* I1109 21:47:01.233042       1 controller_utils.go:1034] Caches are synced for service config controller
	* I1109 21:47:01.233582       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	* E1109 21:48:12.096135       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=11, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.096538       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=11, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.096840       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Endpoints: Get https://control-plane.minikube.internal:8443/api/v1/endpoints?resourceVersion=436&timeout=7m0s&timeoutSeconds=420&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.097012       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?resourceVersion=222&timeout=7m48s&timeoutSeconds=468&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* 
	* ==> kube-scheduler [14a3846ed127] <==
	* E1109 21:46:47.959033       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:46:47.959036       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:46:47.959109       1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* I1109 21:46:49.832766       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	* I1109 21:46:49.933460       1 controller_utils.go:1034] Caches are synced for scheduler controller
	* E1109 21:48:12.096110       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=399, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.096588       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=399, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.096823       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=399, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.097027       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=399, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.097196       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=399, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.097568       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=399, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.097813       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=399, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.098019       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=399, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.098201       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=399, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.098394       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=399, ErrCode=NO_ERROR, debug=""
	* E1109 21:48:12.098706       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=359&timeout=5m28s&timeoutSeconds=328&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.098720       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?resourceVersion=1&timeout=9m2s&timeoutSeconds=542&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.098737       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?resourceVersion=1&timeout=5m57s&timeoutSeconds=357&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.098787       1 reflector.go:251] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to watch *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=432&timeoutSeconds=571&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.098808       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?resourceVersion=437&timeout=5m43s&timeoutSeconds=343&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.098815       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?resourceVersion=222&timeout=9m23s&timeoutSeconds=563&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.098850       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?resourceVersion=1&timeout=7m51s&timeoutSeconds=471&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.098867       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?resourceVersion=1&timeout=5m29s&timeoutSeconds=329&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.098911       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?resourceVersion=382&timeout=5m14s&timeoutSeconds=314&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* E1109 21:48:12.098936       1 reflector.go:251] k8s.io/client-go/informers/factory.go:132: Failed to watch *v1.ReplicationController: Get https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?resourceVersion=1&timeout=5m37s&timeoutSeconds=337&watch=true: dial tcp 192.168.59.16:8443: connect: connection refused
	* 
	* ==> kube-scheduler [edd9fb6c5659] <==
	* I1109 21:49:01.766611       1 serving.go:318] Generated self-signed cert in-memory
	* W1109 21:49:02.765199       1 authentication.go:235] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
	* W1109 21:49:02.765230       1 authentication.go:238] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
	* W1109 21:49:02.765245       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
	* I1109 21:49:02.768481       1 server.go:150] Version: v1.13.0
	* I1109 21:49:02.768834       1 defaults.go:210] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	* W1109 21:49:02.770672       1 authorization.go:47] Authorization is disabled
	* W1109 21:49:02.770698       1 authentication.go:55] Authentication is disabled
	* I1109 21:49:02.770712       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on 127.0.0.1:10251
	* I1109 21:49:02.771332       1 secure_serving.go:116] Serving securely on [::]:10259
	* I1109 21:49:11.473969       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	* I1109 21:49:11.574167       1 controller_utils.go:1034] Caches are synced for scheduler controller
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2020-11-09 21:48:25 UTC, end at Mon 2020-11-09 21:49:41 UTC. --
	* Nov 09 21:49:10 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:10.659475    1076 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-8nr45" (UniqueName: "kubernetes.io/secret/17b88546-22d5-11eb-bca7-0242456c8bb3-kube-proxy-token-8nr45") pod "kube-proxy-gmlqw" (UID: "17b88546-22d5-11eb-bca7-0242456c8bb3")
	* Nov 09 21:49:10 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:10.659639    1076 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-prkr2" (UniqueName: "kubernetes.io/secret/3e538e60-22d5-11eb-bca7-0242456c8bb3-default-token-prkr2") pod "busybox" (UID: "3e538e60-22d5-11eb-bca7-0242456c8bb3")
	* Nov 09 21:49:10 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:10.659699    1076 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-fs6q7" (UniqueName: "kubernetes.io/secret/18d7b974-22d5-11eb-bca7-0242456c8bb3-storage-provisioner-token-fs6q7") pod "storage-provisioner" (UID: "18d7b974-22d5-11eb-bca7-0242456c8bb3")
	* Nov 09 21:49:10 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:10.659733    1076 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/17b89baa-22d5-11eb-bca7-0242456c8bb3-config-volume") pod "coredns-86c58d9df4-dgjln" (UID: "17b89baa-22d5-11eb-bca7-0242456c8bb3")
	* Nov 09 21:49:10 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:10.760882    1076 reconciler.go:154] Reconciler: start to sync state
	* Nov 09 21:49:11 old-k8s-version-20201109134552-342799 kubelet[1076]: E1109 21:49:11.686280    1076 remote_runtime.go:282] ContainerStatus "6271ad6709bbabbc79fd85986babe78231b36960c044baa26875373694b52a1d" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 6271ad6709bbabbc79fd85986babe78231b36960c044baa26875373694b52a1d
	* Nov 09 21:49:11 old-k8s-version-20201109134552-342799 kubelet[1076]: E1109 21:49:11.686369    1076 kuberuntime_container.go:397] ContainerStatus for 6271ad6709bbabbc79fd85986babe78231b36960c044baa26875373694b52a1d error: rpc error: code = Unknown desc = Error: No such container: 6271ad6709bbabbc79fd85986babe78231b36960c044baa26875373694b52a1d
	* Nov 09 21:49:11 old-k8s-version-20201109134552-342799 kubelet[1076]: E1109 21:49:11.686393    1076 kuberuntime_manager.go:871] getPodContainerStatuses for pod "storage-provisioner_kube-system(18d7b974-22d5-11eb-bca7-0242456c8bb3)" failed: rpc error: code = Unknown desc = Error: No such container: 6271ad6709bbabbc79fd85986babe78231b36960c044baa26875373694b52a1d
	* Nov 09 21:49:11 old-k8s-version-20201109134552-342799 kubelet[1076]: E1109 21:49:11.686434    1076 generic.go:247] PLEG: Ignoring events for pod storage-provisioner/kube-system: rpc error: code = Unknown desc = Error: No such container: 6271ad6709bbabbc79fd85986babe78231b36960c044baa26875373694b52a1d
	* Nov 09 21:49:12 old-k8s-version-20201109134552-342799 kubelet[1076]: W1109 21:49:12.461926    1076 pod_container_deletor.go:75] Container "c505f78f24db946b286eb25bc85c72c9018a30dcf4057653b47429bd38ad6d40" not found in pod's containers
	* Nov 09 21:49:12 old-k8s-version-20201109134552-342799 kubelet[1076]: W1109 21:49:12.480319    1076 pod_container_deletor.go:75] Container "2ed15a87982cabaaa6538e15459da05656ac7d5d54f029ac8f03acee2ac45fa7" not found in pod's containers
	* Nov 09 21:49:14 old-k8s-version-20201109134552-342799 kubelet[1076]: E1109 21:49:14.364172    1076 summary_sys_containers.go:45] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Nov 09 21:49:14 old-k8s-version-20201109134552-342799 kubelet[1076]: E1109 21:49:14.364542    1076 helpers.go:735] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Nov 09 21:49:14 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:14.878138    1076 kubelet_node_status.go:114] Node old-k8s-version-20201109134552-342799 was previously registered
	* Nov 09 21:49:14 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:14.878190    1076 kubelet_node_status.go:75] Successfully registered node old-k8s-version-20201109134552-342799
	* Nov 09 21:49:20 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:20.392169    1076 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-95m6p" (UniqueName: "kubernetes.io/secret/6c190efb-22d5-11eb-bfda-02423a01d930-kubernetes-dashboard-token-95m6p") pod "kubernetes-dashboard-66766c77dc-f24f5" (UID: "6c190efb-22d5-11eb-bfda-02423a01d930")
	* Nov 09 21:49:20 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:20.392273    1076 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/6c17ef17-22d5-11eb-bfda-02423a01d930-tmp-volume") pod "dashboard-metrics-scraper-7fc7ffbd75-kkdqv" (UID: "6c17ef17-22d5-11eb-bfda-02423a01d930")
	* Nov 09 21:49:20 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:20.392319    1076 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-95m6p" (UniqueName: "kubernetes.io/secret/6c17ef17-22d5-11eb-bfda-02423a01d930-kubernetes-dashboard-token-95m6p") pod "dashboard-metrics-scraper-7fc7ffbd75-kkdqv" (UID: "6c17ef17-22d5-11eb-bfda-02423a01d930")
	* Nov 09 21:49:20 old-k8s-version-20201109134552-342799 kubelet[1076]: I1109 21:49:20.392351    1076 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/6c190efb-22d5-11eb-bfda-02423a01d930-tmp-volume") pod "kubernetes-dashboard-66766c77dc-f24f5" (UID: "6c190efb-22d5-11eb-bfda-02423a01d930")
	* Nov 09 21:49:21 old-k8s-version-20201109134552-342799 kubelet[1076]: W1109 21:49:21.158772    1076 pod_container_deletor.go:75] Container "386cd3ccf8164f1e29f55682e3b5b3f5d0a7a9c625255302e6f97ef9c3b43ad9" not found in pod's containers
	* Nov 09 21:49:21 old-k8s-version-20201109134552-342799 kubelet[1076]: W1109 21:49:21.208286    1076 pod_container_deletor.go:75] Container "c1446a446bcab28078a7e963fe9f0d8ad1c71e3d0e8277b1874d2ba98ccf1f58" not found in pod's containers
	* Nov 09 21:49:24 old-k8s-version-20201109134552-342799 kubelet[1076]: E1109 21:49:24.390525    1076 summary_sys_containers.go:45] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Nov 09 21:49:24 old-k8s-version-20201109134552-342799 kubelet[1076]: E1109 21:49:24.390599    1076 helpers.go:735] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Nov 09 21:49:34 old-k8s-version-20201109134552-342799 kubelet[1076]: E1109 21:49:34.414226    1076 summary_sys_containers.go:45] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Nov 09 21:49:34 old-k8s-version-20201109134552-342799 kubelet[1076]: E1109 21:49:34.414274    1076 helpers.go:735] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* 
	* ==> kubernetes-dashboard [c38f07639a34] <==
	* 2020/11/09 21:49:21 Starting overwatch
	* 2020/11/09 21:49:21 Using namespace: kubernetes-dashboard
	* 2020/11/09 21:49:21 Using in-cluster config to connect to apiserver
	* 2020/11/09 21:49:21 Using secret token for csrf signing
	* 2020/11/09 21:49:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	* 2020/11/09 21:49:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	* 2020/11/09 21:49:21 Successful initial request to the apiserver, version: v1.13.0
	* 2020/11/09 21:49:21 Generating JWE encryption key
	* 2020/11/09 21:49:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	* 2020/11/09 21:49:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	* 2020/11/09 21:49:21 Initializing JWE encryption key from synchronized object
	* 2020/11/09 21:49:21 Creating in-cluster Sidecar client
	* 2020/11/09 21:49:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	* 2020/11/09 21:49:21 Serving insecurely on HTTP port: 9090
	* 
	* ==> storage-provisioner [6271ad6709bb] <==
	* 
	* ==> storage-provisioner [b3aa2f145af5] <==
	* I1109 21:47:40.011460       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I1109 21:47:57.414254       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I1109 21:47:57.414415       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20201109134552-342799_a4fa827a-e551-4859-a5e9-cfcfad954e71!
	* I1109 21:47:57.414466       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"18d34421-22d5-11eb-bca7-0242456c8bb3", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20201109134552-342799_a4fa827a-e551-4859-a5e9-cfcfad954e71 became leader
	* I1109 21:47:57.514778       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20201109134552-342799_a4fa827a-e551-4859-a5e9-cfcfad954e71!

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20201109134552-342799 -n old-k8s-version-20201109134552-342799
helpers_test.go:255: (dbg) Run:  kubectl --context old-k8s-version-20201109134552-342799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: non-running pods: 
helpers_test.go:263: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:266: (dbg) Run:  kubectl --context old-k8s-version-20201109134552-342799 describe pod 
helpers_test.go:266: (dbg) Non-zero exit: kubectl --context old-k8s-version-20201109134552-342799 describe pod : exit status 1 (82.135033ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:268: kubectl --context old-k8s-version-20201109134552-342799 describe pod : exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (9.70s)

                                                
                                    
TestStartStop/group/crio/serial/VerifyKubernetesImages (8.03s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/VerifyKubernetesImages
start_stop_delete_test.go:232: (dbg) Run:  out/minikube-linux-amd64 ssh -p crio-20201109134622-342799 "sudo crictl images -o json"
start_stop_delete_test.go:232: Found non-minikube image: kindest/kindnetd:0.5.4
start_stop_delete_test.go:232: Found non-minikube image: library/busybox:1.28.4-glibc
start_stop_delete_test.go:232: Found non-minikube image: minikube-local-cache-test:functional-20201109132758-342799
start_stop_delete_test.go:232: v1.15.7 images mismatch (-want +got):
[]string{
- 	"docker.io/kubernetesui/dashboard:v2.0.3",
- 	"docker.io/kubernetesui/metrics-scraper:v1.0.4",
	"gcr.io/k8s-minikube/storage-provisioner:v3",
	"k8s.gcr.io/coredns:1.3.1",
	... // 4 identical elements
	"k8s.gcr.io/kube-scheduler:v1.15.7",
	"k8s.gcr.io/pause:3.1",
+ 	"kubernetesui/dashboard:v2.0.3",
+ 	"kubernetesui/metrics-scraper:v1.0.4",
}
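The diff above differs only in the registry prefix: the test expects `docker.io/kubernetesui/...` while CRI-O reports the same images as `kubernetesui/...`. As a minimal illustration (not minikube's actual test code; `normalizeImage` is a hypothetical helper), stripping the implicit Docker Hub prefixes before sorting and comparing would make the two lists match:

package main

import (
	"fmt"
	"sort"
	"strings"
)

// normalizeImage strips the implicit Docker Hub prefixes so that
// "docker.io/kubernetesui/dashboard:v2.0.3" and
// "kubernetesui/dashboard:v2.0.3" compare as the same reference.
func normalizeImage(ref string) string {
	ref = strings.TrimPrefix(ref, "docker.io/")
	ref = strings.TrimPrefix(ref, "library/")
	return ref
}

// normalizeAll normalizes and sorts a list of image references.
func normalizeAll(refs []string) []string {
	out := make([]string, 0, len(refs))
	for _, r := range refs {
		out = append(out, normalizeImage(r))
	}
	sort.Strings(out)
	return out
}

func main() {
	want := []string{
		"docker.io/kubernetesui/dashboard:v2.0.3",
		"docker.io/kubernetesui/metrics-scraper:v1.0.4",
		"k8s.gcr.io/pause:3.1",
	}
	got := []string{
		"kubernetesui/dashboard:v2.0.3",
		"kubernetesui/metrics-scraper:v1.0.4",
		"k8s.gcr.io/pause:3.1",
	}
	// Prints: equal after normalization: true
	fmt.Println("equal after normalization:",
		fmt.Sprint(normalizeAll(want)) == fmt.Sprint(normalizeAll(got)))
}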
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/crio/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:225: (dbg) Run:  docker inspect crio-20201109134622-342799
helpers_test.go:229: (dbg) docker inspect crio-20201109134622-342799:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53765626b7a1aecfcce9eed5681d182b4cd6444865c6ca096af2113158afe4dc",
	        "Created": "2020-11-09T21:46:25.088398261Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 561124,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-11-09T21:50:25.404435082Z",
	            "FinishedAt": "2020-11-09T21:50:02.674640676Z"
	        },
	        "Image": "sha256:e0876e0a2db41a04c8143cbd27a7a5f9e10b610cb093093def031c59e5b44b0c",
	        "ResolvConfPath": "/var/lib/docker/containers/53765626b7a1aecfcce9eed5681d182b4cd6444865c6ca096af2113158afe4dc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53765626b7a1aecfcce9eed5681d182b4cd6444865c6ca096af2113158afe4dc/hostname",
	        "HostsPath": "/var/lib/docker/containers/53765626b7a1aecfcce9eed5681d182b4cd6444865c6ca096af2113158afe4dc/hosts",
	        "LogPath": "/var/lib/docker/containers/53765626b7a1aecfcce9eed5681d182b4cd6444865c6ca096af2113158afe4dc/53765626b7a1aecfcce9eed5681d182b4cd6444865c6ca096af2113158afe4dc-json.log",
	        "Name": "/crio-20201109134622-342799",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "crio-20201109134622-342799:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "crio-20201109134622-342799",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bcb35dc41ffa0fca1037196f1e0324b9e4c4e8f0245b350bb2ef281580ee2585-init/diff:/var/lib/docker/overlay2/95e86d765bff52afd170fbce58d14aba80acb592622f18ddf93f06d4a9f68b7b/diff:/var/lib/docker/overlay2/f3b9d7ca87d0bc2c33a2b73d4acfe36b9573ed84758410f9fb2646a520f241fe/diff:/var/lib/docker/overlay2/207fc0f6ee67dd96ccb25c63e93e7189f5f397b509f8c71ead5583a80c5f94b5/diff:/var/lib/docker/overlay2/8caa5ec9f05b14f99c7708700c1734fe9d519d629a5228c0c3cb1231f1cef09e/diff:/var/lib/docker/overlay2/f2f5d9470c647e56b32cee1438db6b6249291738854b6e7a0a4a405034649aa6/diff:/var/lib/docker/overlay2/99b3672a581ed70eb5d25da5aed44455272a7adbd48148c91dc38abfbaa6e93d/diff:/var/lib/docker/overlay2/4bc3f7ac122327f368cdfb9bf82d578fce25cf7f63160aa08d869610da8889f4/diff:/var/lib/docker/overlay2/0f945a2ada7520d0f95289ddf08a8e7aedfd5dbfad8087529175cd81d61b52bc/diff:/var/lib/docker/overlay2/a14122d44a5bdb05416a0e7fbe1684738e86454c2eeb7671ecc856e6fdaa0273/diff:/var/lib/docker/overlay2/e669c0
1451df27b1aeed4835d4d9d40fae282947f2b260d9bc1d86355f464b80/diff:/var/lib/docker/overlay2/f0227d1f2e3671850f58a9e89d347d8c2a26c40d79b078f02122819452c582d8/diff:/var/lib/docker/overlay2/35b3431b00b5c7bd66b1d24487a8a45645708ce7a87848986068c0df1515b812/diff:/var/lib/docker/overlay2/c1af3457c560ed19e519e64ff919ac7035d8d9801f4f44adf7b160339827ebd0/diff:/var/lib/docker/overlay2/3861f37a52f49a93d483d69cc77384a5792b260325af8224c6225039f7480889/diff:/var/lib/docker/overlay2/05239e7556fc89f82e761ff1ddd2d1ed0ee1b426a039ad3c9a3ba0519d737f51/diff:/var/lib/docker/overlay2/cb1f04e7554841e6f2102080f7854a3c7e464e45648c1419ef2f8b4d8a03f0bb/diff:/var/lib/docker/overlay2/80908997b1122a686bee670e8dd93659dece6b51614d8576f02c11b96178534d/diff:/var/lib/docker/overlay2/6c4d0be909f5f0a95818a6009b44c355b57e39b0709de1cce2ba45597354d590/diff:/var/lib/docker/overlay2/3a855d5f2b162c2d938a5745ba8131c9a2b244967b4a03f6cddd177236f3a934/diff:/var/lib/docker/overlay2/0ab0a2a3e614830a41a389aa6ff55503f9c10b1227acfb927550f9726eb9a605/diff:/var/lib/d
ocker/overlay2/3331dc047d4c1c4d3b6bf48aba65be1ad34cff6d25329ef56e91ad8e4ee65fde/diff:/var/lib/docker/overlay2/d286c2ac5bb00869e09924bb7fc7389073b7267af1cc4c3ae6a339b29e4c0cc1/diff:/var/lib/docker/overlay2/d0a518fbd2bc93756d11040cfe4d7757f96e813dd9ddf918a54a82f7e61cb791/diff:/var/lib/docker/overlay2/2d63359a4d0da5faa54d017f487391caaee972fe4fb98db4b82e7aee2d9b01ee/diff:/var/lib/docker/overlay2/40ca142bba947004e2782bce25bd04810589fffbf8086ba91e3a074517cfa13c/diff:/var/lib/docker/overlay2/5b738c1270b81825ef44e9bd466dd8533bc4e3e3796355d7dc5dc104fb3a18c8/diff:/var/lib/docker/overlay2/c57d8a6f6b648bd2cf9d7140762b8590e1a71b9040c8a0f2b8ab31f65011bcd2/diff:/var/lib/docker/overlay2/37f83aba64f92a6cf695468f5b0a1016634c95e09933dcecec297cca22b9038b/diff:/var/lib/docker/overlay2/396d3843ed4e04893bd4bf961eab2522ce90962e626e057f5be38cbe0ed65d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bcb35dc41ffa0fca1037196f1e0324b9e4c4e8f0245b350bb2ef281580ee2585/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bcb35dc41ffa0fca1037196f1e0324b9e4c4e8f0245b350bb2ef281580ee2585/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bcb35dc41ffa0fca1037196f1e0324b9e4c4e8f0245b350bb2ef281580ee2585/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "crio-20201109134622-342799",
	                "Source": "/var/lib/docker/volumes/crio-20201109134622-342799/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "crio-20201109134622-342799",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "crio-20201109134622-342799",
	                "name.minikube.sigs.k8s.io": "crio-20201109134622-342799",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "13b9d275165168663b835e9d5d8d8558d8896b9ced8541731d3ddafb0f009d96",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/13b9d2751651",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "crio-20201109134622-342799": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.16"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "53765626b7a1"
	                    ],
	                    "NetworkID": "0d123abca8e29edd152bc5f0dba276d287d6db64da00a592215f49e48a6ff594",
	                    "EndpointID": "d2aa82c00c75acd80aa40fed88c56ea6f7ba4667c62d7e808fa5a6c382a27977",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:10",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p crio-20201109134622-342799 -n crio-20201109134622-342799
helpers_test.go:238: <<< TestStartStop/group/crio/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/crio/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p crio-20201109134622-342799 logs -n 25
helpers_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 -p crio-20201109134622-342799 logs -n 25: exit status 110 (2.673659973s)

                                                
                                                
-- stdout --
	* ==> CRI-O <==
	* -- Logs begin at Mon 2020-11-09 21:50:25 UTC, end at Mon 2020-11-09 21:51:13 UTC. --
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.141241826Z" level=error msg="Failed to update container state for 6a9ffdf3d109926ad93745d5b40f9025becf65371b211c619e87b509bcb4829f: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"6a9ffdf3d109926ad93745d5b40f9025becf65371b211c619e87b509bcb4829f\\\" does not exist\"\ncontainer \"6a9ffdf3d109926ad93745d5b40f9025becf65371b211c619e87b509bcb4829f\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.156050272Z" level=error msg="Failed to update container state for 8088e396f476344930ba74cd357f9a5ba5ea8baa0c560fbd8f657dbdac6efa6d: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"8088e396f476344930ba74cd357f9a5ba5ea8baa0c560fbd8f657dbdac6efa6d\\\" does not exist\"\ncontainer \"8088e396f476344930ba74cd357f9a5ba5ea8baa0c560fbd8f657dbdac6efa6d\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.169675012Z" level=error msg="Failed to update container state for 53e7311251b35d0669b1425683900bbd0d895a04628ae2160fac2909f1d44144: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"53e7311251b35d0669b1425683900bbd0d895a04628ae2160fac2909f1d44144\\\" does not exist\"\ncontainer \"53e7311251b35d0669b1425683900bbd0d895a04628ae2160fac2909f1d44144\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.182431306Z" level=error msg="Failed to update container state for ecc5233f346a9911101335f7ece030603228cf8e52874a74d2747bd258e121c9: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"ecc5233f346a9911101335f7ece030603228cf8e52874a74d2747bd258e121c9\\\" does not exist\"\ncontainer \"ecc5233f346a9911101335f7ece030603228cf8e52874a74d2747bd258e121c9\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.195135428Z" level=error msg="Failed to update container state for 954641fb62a5df6091abcf43cd3d7f8a28f354523f1f5c3379639d367932df42: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"954641fb62a5df6091abcf43cd3d7f8a28f354523f1f5c3379639d367932df42\\\" does not exist\"\ncontainer \"954641fb62a5df6091abcf43cd3d7f8a28f354523f1f5c3379639d367932df42\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.206085720Z" level=error msg="Failed to update container state for f9d705b3e01f96055ac6a99a78db798454ad08d51b30f585ad4a38d4e9009787: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"f9d705b3e01f96055ac6a99a78db798454ad08d51b30f585ad4a38d4e9009787\\\" does not exist\"\ncontainer \"f9d705b3e01f96055ac6a99a78db798454ad08d51b30f585ad4a38d4e9009787\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.217595960Z" level=error msg="Failed to update container state for 0535df8d071879c8406f331b9f4409352432de9cc58d427b7dfce4793a673601: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"0535df8d071879c8406f331b9f4409352432de9cc58d427b7dfce4793a673601\\\" does not exist\"\ncontainer \"0535df8d071879c8406f331b9f4409352432de9cc58d427b7dfce4793a673601\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.230683459Z" level=error msg="Failed to update container state for ce445f81c06114e1d9f61dd7f989f39d53b9b0a7ef1d708d62e7fcc8f8d9329c: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"ce445f81c06114e1d9f61dd7f989f39d53b9b0a7ef1d708d62e7fcc8f8d9329c\\\" does not exist\"\ncontainer \"ce445f81c06114e1d9f61dd7f989f39d53b9b0a7ef1d708d62e7fcc8f8d9329c\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.244719045Z" level=error msg="Failed to update container state for 027e4e74e2d9e5c599d18c10b2aada6f44fd57270052cfa8c2697228dc2c6568: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"027e4e74e2d9e5c599d18c10b2aada6f44fd57270052cfa8c2697228dc2c6568\\\" does not exist\"\ncontainer \"027e4e74e2d9e5c599d18c10b2aada6f44fd57270052cfa8c2697228dc2c6568\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.257169178Z" level=error msg="Failed to update container state for 0f1d181a51923b71a8c9656fbcdaeb107c270f8a06a3ad1798edc1378bff1b66: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"0f1d181a51923b71a8c9656fbcdaeb107c270f8a06a3ad1798edc1378bff1b66\\\" does not exist\"\ncontainer \"0f1d181a51923b71a8c9656fbcdaeb107c270f8a06a3ad1798edc1378bff1b66\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.269071075Z" level=error msg="Failed to update container state for 2eb925a38dde3615c382fc88effc40690917d8bee0ee4f89aa920834a6432ffa: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"2eb925a38dde3615c382fc88effc40690917d8bee0ee4f89aa920834a6432ffa\\\" does not exist\"\ncontainer \"2eb925a38dde3615c382fc88effc40690917d8bee0ee4f89aa920834a6432ffa\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.281687309Z" level=error msg="Failed to update container state for fa2190fa1040ee01540dbc6246afa3ed28278757193f46497740f98f208305f0: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"fa2190fa1040ee01540dbc6246afa3ed28278757193f46497740f98f208305f0\\\" does not exist\"\ncontainer \"fa2190fa1040ee01540dbc6246afa3ed28278757193f46497740f98f208305f0\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.294324905Z" level=error msg="Failed to update container state for b11a67a0f8782086f120e3966570686c284c001eea6d1baa75359d8d3192e065: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"b11a67a0f8782086f120e3966570686c284c001eea6d1baa75359d8d3192e065\\\" does not exist\"\ncontainer \"b11a67a0f8782086f120e3966570686c284c001eea6d1baa75359d8d3192e065\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.294893198Z" level=error msg="Error checking loopback interface: failed to Statfs \"\": no such file or directory"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.294948513Z" level=error msg="Error checking loopback interface: failed to Statfs \"\": no such file or directory"
	* Nov 09 21:50:31 crio-20201109134622-342799 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	* Nov 09 21:50:43 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:43.138273225Z" level=error msg="Ignoring error tearing down loopback interface: failed to Statfs \"\": no such file or directory"
	* Nov 09 21:50:43 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:43.498840854Z" level=error msg="Ignoring error tearing down loopback interface: failed to Statfs \"\": no such file or directory"
	* Nov 09 21:50:46 crio-20201109134622-342799 systemd[1]: Stopping Container Runtime Interface for OCI (CRI-O)...
	* Nov 09 21:50:46 crio-20201109134622-342799 systemd[1]: crio.service: Succeeded.
	* Nov 09 21:50:46 crio-20201109134622-342799 systemd[1]: Stopped Container Runtime Interface for OCI (CRI-O).
	* Nov 09 21:50:46 crio-20201109134622-342799 systemd[1]: Starting Container Runtime Interface for OCI (CRI-O)...
	* Nov 09 21:50:47 crio-20201109134622-342799 crio[3240]: time="2020-11-09 21:50:47.146967134Z" level=error msg="Error checking loopback interface: failed to Statfs \"\": no such file or directory"
	* Nov 09 21:50:47 crio-20201109134622-342799 crio[3240]: time="2020-11-09 21:50:47.147067826Z" level=error msg="Error checking loopback interface: failed to Statfs \"\": no such file or directory"
	* Nov 09 21:50:47 crio-20201109134622-342799 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                        ATTEMPT             POD ID
	* c835dd6c9df84       86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4                                     14 seconds ago       Running             dashboard-metrics-scraper   0                   8694b6bbf6fe4
	* a2385206f4e1f       503bc4b7440b9039d0a18858bba30906e25e2f690094b2b4f8735dfc3609cda2                                     14 seconds ago       Running             kubernetes-dashboard        0                   4b44a8728c5aa
	* 9c904ee6009b9       2186a1a396deb58f1ea5eaf20193a518ca05049b46ccd754ec83366b5c8c13d5                                     28 seconds ago       Running             kindnet-cni                 1                   241f57c295767
	* 2bbdc97480d6c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                     28 seconds ago       Running             busybox                     1                   932bf95124b13
	* 32c570294975d       eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c                                     29 seconds ago       Running             coredns                     1                   22865c95bec02
	* f089f0193b59f       ae3d9889423ede337df3814baa77445e566597a5a882f3cdf933b4d9e0025f0f                                     29 seconds ago       Running             kube-proxy                  1                   fbd549c871e59
	* 123ca3d133b5a       bad58561c4be797bbea256940eea1d9967b7581d1103f9e4f03f32936c1ae289                                     29 seconds ago       Running             storage-provisioner         2                   fb74407ee4dd6
	* 35d3262f3358b       78b4180ab00d0fb99b1be2b5ef92a4831ad07f00f27e6746828f374497d79367                                     37 seconds ago       Running             kube-scheduler              1                   5fbb4926d94f5
	* 924ace492bcb4       c500a024ff843278184e5454ff6ee040a106c867c5a0361886fd3057cace2264                                     37 seconds ago       Running             kube-apiserver              1                   27a2f0b2d5892
	* 6b2af65166e1e       d2f090f2479fbf92c508100e0a6106b3516bb70421a465586661feb1494145a2                                     37 seconds ago       Running             kube-controller-manager     0                   158c81fa54c66
	* 55a92deeac60b       2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d                                     38 seconds ago       Running             etcd                        1                   f1c3b3e0dc6c7
	* 954641fb62a5d       docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998    About a minute ago   Exited              busybox                     0                   7cc86efc09350
	* 0f1d181a51923       bad58561c4be797bbea256940eea1d9967b7581d1103f9e4f03f32936c1ae289                                     2 minutes ago        Exited              storage-provisioner         1                   b8d41b11b6a8b
	* 027e4e74e2d9e       eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c                                     2 minutes ago        Exited              coredns                     0                   53e7311251b35
	* ce445f81c0611       docker.io/kindest/kindnetd@sha256:46e34ccb3e08557767b7c80e957741d9f2590968ff32646875632d40cf62adad   2 minutes ago        Exited              kindnet-cni                 0                   f669579f75dba
	* b11a67a0f8782       ae3d9889423ede337df3814baa77445e566597a5a882f3cdf933b4d9e0025f0f                                     3 minutes ago        Exited              kube-proxy                  0                   608d48dcacece
	* fa2190fa1040e       c500a024ff843278184e5454ff6ee040a106c867c5a0361886fd3057cace2264                                     3 minutes ago        Exited              kube-apiserver              0                   16b3f4010c0a5
	* 2eb925a38dde3       78b4180ab00d0fb99b1be2b5ef92a4831ad07f00f27e6746828f374497d79367                                     3 minutes ago        Exited              kube-scheduler              0                   6a9ffdf3d1099
	* 0535df8d07187       d2f090f2479fbf92c508100e0a6106b3516bb70421a465586661feb1494145a2                                     3 minutes ago        Exited              kube-controller-manager     0                   8088e396f4763
	* f9d705b3e01f9       2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d                                     3 minutes ago        Exited              etcd                        0                   ecc5233f346a9
	* 
	* ==> coredns [027e4e74e2d9e5c599d18c10b2aada6f44fd57270052cfa8c2697228dc2c6568] <==
	* .:53
	* 2020-11-09T21:48:34.504Z [INFO] CoreDNS-1.3.1
	* 2020-11-09T21:48:34.504Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	* CoreDNS-1.3.1
	* linux/amd64, go1.11.4, 6b56a9c
	* 2020-11-09T21:48:34.504Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
	* [INFO] SIGTERM: Shutting down servers then terminating
	* 
	* ==> coredns [32c570294975d3d7dfd7878495a9dd37ec0ea661ca68fed714a34d76a0c12dfa] <==
	* .:53
	* 2020-11-09T21:50:49.789Z [INFO] CoreDNS-1.3.1
	* 2020-11-09T21:50:49.789Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	* CoreDNS-1.3.1
	* linux/amd64, go1.11.4, 6b56a9c
	* 2020-11-09T21:50:49.789Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
	* 
	* ==> describe nodes <==
	* Name:               crio-20201109134622-342799
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=crio-20201109134622-342799
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=21ac2a6a37964be4739a8be2fb5a50a8d224597d
	*                     minikube.k8s.io/name=crio-20201109134622-342799
	*                     minikube.k8s.io/updated_at=2020_11_09T13_47_53_0700
	*                     minikube.k8s.io/version=v1.14.2
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Mon, 09 Nov 2020 21:47:48 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Mon, 09 Nov 2020 21:50:42 +0000   Mon, 09 Nov 2020 21:47:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Mon, 09 Nov 2020 21:50:42 +0000   Mon, 09 Nov 2020 21:47:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Mon, 09 Nov 2020 21:50:42 +0000   Mon, 09 Nov 2020 21:47:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Mon, 09 Nov 2020 21:50:42 +0000   Mon, 09 Nov 2020 21:47:43 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.70.16
	*   Hostname:    crio-20201109134622-342799
	* Capacity:
	*  cpu:                8
	*  ephemeral-storage:  515928484Ki
	*  hugepages-1Gi:      0
	*  hugepages-2Mi:      0
	*  memory:             30887000Ki
	*  pods:               110
	* Allocatable:
	*  cpu:                8
	*  ephemeral-storage:  515928484Ki
	*  hugepages-1Gi:      0
	*  hugepages-2Mi:      0
	*  memory:             30887000Ki
	*  pods:               110
	* System Info:
	*  Machine ID:                 2fa66088a34e46d282e6f7e772ab4aea
	*  System UUID:                16c119b5-8489-44d3-86c7-7f1b70fd0010
	*  Boot ID:                    9ad1ab50-5be9-48e2-8ae1-dc31113bc120
	*  Kernel Version:             4.9.0-14-amd64
	*  OS Image:                   Ubuntu 20.04.1 LTS
	*  Operating System:           linux
	*  Architecture:               amd64
	*  Container Runtime Version:  cri-o://1.18.3
	*  Kubelet Version:            v1.15.7
	*  Kube-Proxy Version:         v1.15.7
	* PodCIDR:                     10.244.0.0/24
	* Non-terminated Pods:         (11 in total)
	*   Namespace                  Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                  ----                                                  ------------  ----------  ---------------  -------------  ---
	*   default                    busybox                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	*   kube-system                coredns-5d4dd4b4db-wczxf                              100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m7s
	*   kube-system                etcd-crio-20201109134622-342799                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	*   kube-system                kindnet-n7x9d                                         100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m7s
	*   kube-system                kube-apiserver-crio-20201109134622-342799             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	*   kube-system                kube-controller-manager-crio-20201109134622-342799    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	*   kube-system                kube-proxy-q5gpm                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	*   kube-system                kube-scheduler-crio-20201109134622-342799             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m15s
	*   kube-system                storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	*   kubernetes-dashboard       dashboard-metrics-scraper-c8b69c96c-d6s6b             0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	*   kubernetes-dashboard       kubernetes-dashboard-5ddb79bb9f-ghs7v                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                750m (9%)   100m (1%)
	*   memory             120Mi (0%)  220Mi (0%)
	*   ephemeral-storage  0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age                    From                                    Message
	*   ----    ------                   ----                   ----                                    -------
	*   Normal  NodeHasSufficientMemory  3m34s (x7 over 3m34s)  kubelet, crio-20201109134622-342799     Node crio-20201109134622-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    3m34s (x8 over 3m34s)  kubelet, crio-20201109134622-342799     Node crio-20201109134622-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     3m34s (x8 over 3m34s)  kubelet, crio-20201109134622-342799     Node crio-20201109134622-342799 status is now: NodeHasSufficientPID
	*   Normal  Starting                 3m5s                   kube-proxy, crio-20201109134622-342799  Starting kube-proxy.
	*   Normal  Starting                 40s                    kubelet, crio-20201109134622-342799     Starting kubelet.
	*   Normal  NodeAllocatableEnforced  40s                    kubelet, crio-20201109134622-342799     Updated Node Allocatable limit across pods
	*   Normal  NodeHasSufficientMemory  39s (x8 over 40s)      kubelet, crio-20201109134622-342799     Node crio-20201109134622-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    39s (x8 over 40s)      kubelet, crio-20201109134622-342799     Node crio-20201109134622-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     39s (x7 over 40s)      kubelet, crio-20201109134622-342799     Node crio-20201109134622-342799 status is now: NodeHasSufficientPID
	*   Normal  Starting                 29s                    kube-proxy, crio-20201109134622-342799  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-0d123abca8e2
	* [  +0.000002] ll header: 00000000: 02 42 cf 5a 89 34 02 42 c0 a8 46 10 08 00        .B.Z.4.B..F...
	* [  +4.031819] net_ratelimit: 1 callbacks suppressed
	* [  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-0d123abca8e2
	* [  +0.000003] ll header: 00000000: 02 42 cf 5a 89 34 02 42 c0 a8 46 10 08 00        .B.Z.4.B..F...
	* [  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-0d123abca8e2
	* [  +0.000001] ll header: 00000000: 02 42 cf 5a 89 34 02 42 c0 a8 46 10 08 00        .B.Z.4.B..F...
	* [  +0.000028] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-0d123abca8e2
	* [  +0.000003] ll header: 00000000: 02 42 cf 5a 89 34 02 42 c0 a8 46 10 08 00        .B.Z.4.B..F...
	* [  +6.213984] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 32 61 1b c2 2c d7 08 06        ......2a..,...
	* [  +0.000004] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	* [  +0.000001] ll header: 00000000: ff ff ff ff ff ff 32 61 1b c2 2c d7 08 06        ......2a..,...
	* [  +0.555222] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vethab3a4d18
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff da b2 d2 e5 1b 80 08 06        ..............
	* [  +0.032176] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethde50f238
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 4a 94 51 4e 3f 5e 08 06        ......J.QN?^..
	* [  +1.390019] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-0d123abca8e2
	* [  +0.000004] ll header: 00000000: 02 42 cf 5a 89 34 02 42 c0 a8 46 10 08 00        .B.Z.4.B..F...
	* [  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-0d123abca8e2
	* [  +0.000001] ll header: 00000000: 02 42 cf 5a 89 34 02 42 c0 a8 46 10 08 00        .B.Z.4.B..F...
	* [  +0.003818] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-0d123abca8e2
	* [  +0.000004] ll header: 00000000: 02 42 cf 5a 89 34 02 42 c0 a8 46 10 08 00        .B.Z.4.B..F...
	* [Nov 9 21:51] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth30a282a7
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff f2 42 0b 96 2c 5b 08 06        .......B..,[..
	* 
	* ==> etcd [55a92deeac60bdca4d5cac0d0d2beae33d2cde265c9a526e7545802eb78acba9] <==
	* 2020-11-09 21:50:38.063263 I | embed: serving client requests on 192.168.70.16:2379
	* 2020-11-09 21:50:38.064215 I | embed: serving client requests on 127.0.0.1:2379
	* proto: no coders for int
	* proto: no encoder for ValueSize int [GetProperties]
	* 2020-11-09 21:50:59.570967 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:2 size:3432" took too long (125.850046ms) to execute
	* 2020-11-09 21:50:59.571113 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/coredns-5d4dd4b4db-wczxf.1645f5635df031c3\" " with result "range_response_count:1 size:515" took too long (214.824381ms) to execute
	* 2020-11-09 21:51:06.749891 W | wal: sync duration of 2.375672244s, expected less than 1s
	* 2020-11-09 21:51:07.036992 W | etcdserver: request "header:<ID:11492471218553520308 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.70.16\" mod_revision:588 > success:<request_put:<key:\"/registry/masterleases/192.168.70.16\" value_size:68 lease:2269099181698744498 >> failure:<request_range:<key:\"/registry/masterleases/192.168.70.16\" > >>" with result "size:16" took too long (286.762387ms) to execute
	* 2020-11-09 21:51:07.037173 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:2 size:3508" took too long (2.592072771s) to execute
	* 2020-11-09 21:51:07.038059 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:3834" took too long (1.174293488s) to execute
	* 2020-11-09 21:51:09.509225 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (2.353197172s) to execute
	* 2020-11-09 21:51:09.509280 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/coredns-5d4dd4b4db-wczxf.1645f5635df031c3\" " with result "range_response_count:1 size:515" took too long (153.034025ms) to execute
	* 2020-11-09 21:51:09.509365 W | etcdserver: read-only range request "key:\"foo\" " with result "range_response_count:0 size:5" took too long (1.944283324s) to execute
	* 2020-11-09 21:51:09.509413 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:2 size:3508" took too long (1.460401622s) to execute
	* 2020-11-09 21:51:11.166444 W | wal: sync duration of 1.647205331s, expected less than 1s
	* 2020-11-09 21:51:12.260678 W | wal: sync duration of 1.093831389s, expected less than 1s
	* 2020-11-09 21:51:12.260858 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (2.747932882s) to execute
	* 2020-11-09 21:51:12.261027 W | etcdserver: request "header:<ID:11492471218553520319 > lease_revoke:<id:1f7d75aefd488042>" with result "size:29" took too long (1.094293392s) to execute
	* 2020-11-09 21:51:12.261107 W | etcdserver: failed to revoke 1f7d75aefd488042 ("lease not found")
	* 2020-11-09 21:51:12.261128 W | etcdserver: failed to revoke 1f7d75aefd488042 ("lease not found")
	* 2020-11-09 21:51:12.261138 W | etcdserver: failed to revoke 1f7d75aefd488042 ("lease not found")
	* 2020-11-09 21:51:12.261216 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:2 size:3508" took too long (2.212400075s) to execute
	* 2020-11-09 21:51:12.261295 W | etcdserver: failed to revoke 1f7d75aefd488042 ("lease not found")
	* 2020-11-09 21:51:12.261387 W | etcdserver: failed to revoke 1f7d75aefd488042 ("lease not found")
	* 2020-11-09 21:51:12.261436 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.350224788s) to execute
	* 
	* ==> etcd [f9d705b3e01f96055ac6a99a78db798454ad08d51b30f585ad4a38d4e9009787] <==
	* 2020-11-09 21:48:59.362373 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:3588" took too long (12.989808712s) to execute
	* 2020-11-09 21:48:59.362490 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (11.172934921s) to execute
	* 2020-11-09 21:48:59.362605 W | etcdserver: read-only range request "key:\"/registry/volumeattachments\" range_end:\"/registry/volumeattachmentt\" count_only:true " with result "range_response_count:0 size:5" took too long (11.959868192s) to execute
	* 2020-11-09 21:48:59.362745 W | etcdserver: read-only range request "key:\"/registry/clusterroles\" range_end:\"/registry/clusterrolet\" count_only:true " with result "range_response_count:0 size:7" took too long (12.099491944s) to execute
	* 2020-11-09 21:48:59.363057 W | etcdserver: failed to revoke 1f7d75aefa9f622b ("lease not found")
	* 2020-11-09 21:48:59.363076 W | etcdserver: failed to revoke 1f7d75aefa9f622b ("lease not found")
	* 2020-11-09 21:48:59.363099 W | etcdserver: read-only range request "key:\"/registry/statefulsets\" range_end:\"/registry/statefulsett\" count_only:true " with result "range_response_count:0 size:5" took too long (13.858083946s) to execute
	* 2020-11-09 21:48:59.363212 W | etcdserver: failed to revoke 1f7d75aefa9f622b ("lease not found")
	* 2020-11-09 21:48:59.363225 W | etcdserver: failed to revoke 1f7d75aefa9f622b ("lease not found")
	* 2020-11-09 21:48:59.363233 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (13.782526299s) to execute
	* 2020-11-09 21:48:59.363241 W | etcdserver: failed to revoke 1f7d75aefa9f622b ("lease not found")
	* 2020-11-09 21:48:59.363376 W | etcdserver: failed to revoke 1f7d75aefa9f622b ("lease not found")
	* 2020-11-09 21:48:59.380873 W | etcdserver: read-only range request "key:\"/registry/cronjobs\" range_end:\"/registry/cronjobt\" count_only:true " with result "range_response_count:0 size:5" took too long (5.465839894s) to execute
	* 2020-11-09 21:48:59.381157 W | etcdserver: read-only range request "key:\"/registry/daemonsets\" range_end:\"/registry/daemonsett\" count_only:true " with result "range_response_count:0 size:7" took too long (4.732427745s) to execute
	* 2020-11-09 21:48:59.382929 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:4 size:7989" took too long (4.874824141s) to execute
	* 2020-11-09 21:48:59.383155 W | etcdserver: read-only range request "key:\"/registry/jobs\" range_end:\"/registry/jobt\" count_only:true " with result "range_response_count:0 size:5" took too long (4.5310944s) to execute
	* 2020-11-09 21:48:59.383878 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (2.478789707s) to execute
	* 2020-11-09 21:48:59.384069 W | etcdserver: read-only range request "key:\"/registry/ingress\" range_end:\"/registry/ingrest\" count_only:true " with result "range_response_count:0 size:5" took too long (1.447378192s) to execute
	* 2020-11-09 21:48:59.384238 W | etcdserver: read-only range request "key:\"/registry/daemonsets\" range_end:\"/registry/daemonsett\" count_only:true " with result "range_response_count:0 size:7" took too long (1.502418532s) to execute
	* 2020-11-09 21:48:59.384782 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests\" range_end:\"/registry/certificatesigningrequestt\" count_only:true " with result "range_response_count:0 size:7" took too long (6.24410582s) to execute
	* 2020-11-09 21:48:59.385009 W | etcdserver: read-only range request "key:\"/registry/rolebindings\" range_end:\"/registry/rolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (5.589339871s) to execute
	* 2020-11-09 21:48:59.385819 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (6.48034383s) to execute
	* 2020-11-09 21:48:59.386299 W | etcdserver: read-only range request "key:\"/registry/minions/crio-20201109134622-342799\" " with result "range_response_count:1 size:3588" took too long (3.644987743s) to execute
	* 2020-11-09 21:48:59.387369 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers\" range_end:\"/registry/horizontalpodautoscalert\" count_only:true " with result "range_response_count:0 size:5" took too long (4.418420626s) to execute
	* 2020-11-09 21:48:59.388668 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/crio-20201109134622-342799\" " with result "range_response_count:1 size:360" took too long (854.628936ms) to execute
	* 
	* ==> kernel <==
	*  21:51:14 up  1:33,  0 users,  load average: 10.13, 10.05, 8.49
	* Linux crio-20201109134622-342799 4.9.0-14-amd64 #1 SMP Debian 4.9.240-2 (2020-10-30) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [924ace492bcb448471ddfc805e99b498dbe7486ebf84c77aa690645ffa77bca4] <==
	* Trace[851991797]: [1.175301273s] [1.175301273s] END
	* I1109 21:51:07.038922       1 trace.go:81] Trace[1587229739]: "List etcd3: key=/pods/kubernetes-dashboard, resourceVersion=, limit: 0, continue: " (started: 2020-11-09 21:51:04.444516952 +0000 UTC m=+28.070536930) (total time: 2.594359882s):
	* Trace[1587229739]: [2.594359882s] [2.594359882s] END
	* I1109 21:51:07.039048       1 trace.go:81] Trace[1394165715]: "List /api/v1/nodes" (started: 2020-11-09 21:51:05.863207228 +0000 UTC m=+29.489227171) (total time: 1.17581666s):
	* Trace[1394165715]: [1.175397118s] [1.175387281s] Listing from storage done
	* I1109 21:51:07.039173       1 trace.go:81] Trace[1346543781]: "List /api/v1/namespaces/kubernetes-dashboard/pods" (started: 2020-11-09 21:51:04.444358263 +0000 UTC m=+28.070378211) (total time: 2.594795249s):
	* Trace[1346543781]: [2.5945764s] [2.594434015s] Listing from storage done
	* I1109 21:51:07.039286       1 trace.go:81] Trace[1638636632]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2020-11-09 21:51:04.37258274 +0000 UTC m=+27.998602696) (total time: 2.666665207s):
	* Trace[1638636632]: [2.666644857s] [2.664233405s] Transaction committed
	* I1109 21:51:09.510092       1 trace.go:81] Trace[1741058939]: "List etcd3: key=/pods/kubernetes-dashboard, resourceVersion=, limit: 0, continue: " (started: 2020-11-09 21:51:08.04844751 +0000 UTC m=+31.674467525) (total time: 1.461604973s):
	* Trace[1741058939]: [1.461604973s] [1.461604973s] END
	* I1109 21:51:09.510301       1 trace.go:81] Trace[2074990001]: "List /api/v1/namespaces/kubernetes-dashboard/pods" (started: 2020-11-09 21:51:08.048293442 +0000 UTC m=+31.674313424) (total time: 1.461994642s):
	* Trace[2074990001]: [1.461824299s] [1.461684409s] Listing from storage done
	* I1109 21:51:09.510828       1 trace.go:81] Trace[368736186]: "List etcd3: key=/jobs, resourceVersion=, limit: 500, continue: " (started: 2020-11-09 21:51:07.155371944 +0000 UTC m=+30.781391892) (total time: 2.355426182s):
	* Trace[368736186]: [2.355426182s] [2.355426182s] END
	* I1109 21:51:09.510912       1 trace.go:81] Trace[2041738141]: "List /apis/batch/v1/jobs" (started: 2020-11-09 21:51:07.155286386 +0000 UTC m=+30.781306333) (total time: 2.355610802s):
	* Trace[2041738141]: [2.355556621s] [2.355482461s] Listing from storage done
	* I1109 21:51:12.261414       1 trace.go:81] Trace[560758291]: "List etcd3: key=/cronjobs, resourceVersion=, limit: 500, continue: " (started: 2020-11-09 21:51:09.512487538 +0000 UTC m=+33.138507484) (total time: 2.748879777s):
	* Trace[560758291]: [2.748879777s] [2.748879777s] END
	* I1109 21:51:12.261532       1 trace.go:81] Trace[954873413]: "List /apis/batch/v1beta1/cronjobs" (started: 2020-11-09 21:51:09.512403006 +0000 UTC m=+33.138422959) (total time: 2.749115025s):
	* Trace[954873413]: [2.749032942s] [2.748958744s] Listing from storage done
	* I1109 21:51:12.262732       1 trace.go:81] Trace[2049262841]: "List etcd3: key=/pods/kubernetes-dashboard, resourceVersion=, limit: 0, continue: " (started: 2020-11-09 21:51:10.048312248 +0000 UTC m=+33.674332200) (total time: 2.214379731s):
	* Trace[2049262841]: [2.214379731s] [2.214379731s] END
	* I1109 21:51:12.263165       1 trace.go:81] Trace[173883954]: "List /api/v1/namespaces/kubernetes-dashboard/pods" (started: 2020-11-09 21:51:10.048231038 +0000 UTC m=+33.674251069) (total time: 2.214905975s):
	* Trace[173883954]: [2.21451847s] [2.214449996s] Listing from storage done
	* 
	* ==> kube-apiserver [fa2190fa1040ee01540dbc6246afa3ed28278757193f46497740f98f208305f0] <==
	* I1109 21:49:23.491836       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:23.491914       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:23.501725       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:23.959323       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
	* I1109 21:49:23.959694       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:23.959838       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:23.970037       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:25.035675       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
	* I1109 21:49:25.036146       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:25.036515       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:25.048920       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:43.491859       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
	* I1109 21:49:43.492096       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:43.492243       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:43.503257       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:43.959493       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
	* I1109 21:49:43.959712       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:43.959844       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:43.970821       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:45.035849       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
	* I1109 21:49:45.036058       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:45.036170       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:45.036202       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:45.036268       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:45.047481       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* 
	* ==> kube-controller-manager [0535df8d071879c8406f331b9f4409352432de9cc58d427b7dfce4793a673601] <==
	* W1109 21:48:07.206493       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="crio-20201109134622-342799" does not exist
	* I1109 21:48:07.207451       1 controller_utils.go:1036] Caches are synced for taint controller
	* I1109 21:48:07.207605       1 node_lifecycle_controller.go:1189] Initializing eviction metric for zone: 
	* W1109 21:48:07.207719       1 node_lifecycle_controller.go:863] Missing timestamp for Node crio-20201109134622-342799. Assuming now as a timestamp.
	* I1109 21:48:07.207787       1 node_lifecycle_controller.go:1089] Controller detected that zone  is now in state Normal.
	* I1109 21:48:07.208014       1 taint_manager.go:182] Starting NoExecuteTaintManager
	* I1109 21:48:07.208221       1 event.go:258] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"crio-20201109134622-342799", UID:"ebe90bde-49f9-4fdc-a837-98c969cfa070", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node crio-20201109134622-342799 event: Registered Node crio-20201109134622-342799 in Controller
	* I1109 21:48:07.219638       1 controller_utils.go:1036] Caches are synced for TTL controller
	* I1109 21:48:07.267508       1 controller_utils.go:1036] Caches are synced for garbage collector controller
	* I1109 21:48:07.267539       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I1109 21:48:07.288637       1 controller_utils.go:1036] Caches are synced for attach detach controller
	* I1109 21:48:07.288939       1 controller_utils.go:1036] Caches are synced for daemon sets controller
	* I1109 21:48:07.293240       1 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"879c1af6-b019-4309-8790-68862df641bc", APIVersion:"apps/v1", ResourceVersion:"339", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-5d4dd4b4db to 1
	* I1109 21:48:07.295297       1 controller_utils.go:1036] Caches are synced for persistent volume controller
	* I1109 21:48:07.301396       1 controller_utils.go:1036] Caches are synced for node controller
	* I1109 21:48:07.301431       1 range_allocator.go:157] Starting range CIDR allocator
	* I1109 21:48:07.301457       1 controller_utils.go:1029] Waiting for caches to sync for cidrallocator controller
	* I1109 21:48:07.357755       1 controller_utils.go:1036] Caches are synced for resource quota controller
	* I1109 21:48:07.357770       1 controller_utils.go:1036] Caches are synced for garbage collector controller
	* I1109 21:48:07.357769       1 controller_utils.go:1036] Caches are synced for resource quota controller
	* I1109 21:48:07.370941       1 event.go:258] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"6eb71746-9284-4ef3-a4e6-c6eef2f9905c", APIVersion:"apps/v1", ResourceVersion:"223", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-n7x9d
	* I1109 21:48:07.371494       1 event.go:258] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"e1c3926b-aedc-4ba1-a531-89e830ed4064", APIVersion:"apps/v1", ResourceVersion:"209", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-q5gpm
	* I1109 21:48:07.374382       1 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5d4dd4b4db", UID:"c6d65372-6830-47ea-aafa-45e55da8d5f8", APIVersion:"apps/v1", ResourceVersion:"342", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-5d4dd4b4db-tfkqn
	* I1109 21:48:07.457581       1 controller_utils.go:1036] Caches are synced for cidrallocator controller
	* I1109 21:48:07.476734       1 range_allocator.go:310] Set node crio-20201109134622-342799 PodCIDR to 10.244.0.0/24
	* 
	* ==> kube-controller-manager [6b2af65166e1e030c3fcc83bca490d83cf9a98fb4405c0fe827ab5512a89bf51] <==
	* I1109 21:50:57.398200       1 controller_utils.go:1036] Caches are synced for GC controller
	* I1109 21:50:57.398307       1 controller_utils.go:1036] Caches are synced for certificate controller
	* I1109 21:50:57.402577       1 controller_utils.go:1036] Caches are synced for endpoint controller
	* I1109 21:50:57.419268       1 controller_utils.go:1036] Caches are synced for stateful set controller
	* I1109 21:50:57.445618       1 controller_utils.go:1036] Caches are synced for cidrallocator controller
	* I1109 21:50:57.479270       1 controller_utils.go:1036] Caches are synced for daemon sets controller
	* I1109 21:50:57.491488       1 controller_utils.go:1036] Caches are synced for bootstrap_signer controller
	* I1109 21:50:57.520600       1 controller_utils.go:1036] Caches are synced for attach detach controller
	* I1109 21:50:57.527898       1 controller_utils.go:1036] Caches are synced for persistent volume controller
	* I1109 21:50:57.568816       1 controller_utils.go:1036] Caches are synced for expand controller
	* I1109 21:50:57.600129       1 controller_utils.go:1036] Caches are synced for PV protection controller
	* I1109 21:50:57.892093       1 controller_utils.go:1036] Caches are synced for ReplicationController controller
	* I1109 21:50:57.908118       1 controller_utils.go:1036] Caches are synced for HPA controller
	* I1109 21:50:57.996544       1 controller_utils.go:1036] Caches are synced for disruption controller
	* I1109 21:50:57.996573       1 disruption.go:338] Sending events to api server.
	* I1109 21:50:57.998441       1 controller_utils.go:1036] Caches are synced for deployment controller
	* I1109 21:50:58.004625       1 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"f980d012-f11c-4249-b5ec-8b25258576cb", APIVersion:"apps/v1", ResourceVersion:"551", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-5ddb79bb9f to 1
	* I1109 21:50:58.007326       1 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"201432ef-b23e-49c5-b8fe-024dbc5e4f13", APIVersion:"apps/v1", ResourceVersion:"550", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-c8b69c96c to 1
	* I1109 21:50:58.019549       1 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5ddb79bb9f", UID:"dc0f78f9-c86c-4cf3-b1d3-c2c4a111708f", APIVersion:"apps/v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5ddb79bb9f-ghs7v
	* I1109 21:50:58.024574       1 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-c8b69c96c", UID:"3fd911ab-ee27-4009-8693-0a72acfb94e6", APIVersion:"apps/v1", ResourceVersion:"599", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-c8b69c96c-d6s6b
	* I1109 21:50:58.169138       1 controller_utils.go:1036] Caches are synced for garbage collector controller
	* I1109 21:50:58.169282       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I1109 21:50:58.170677       1 controller_utils.go:1036] Caches are synced for resource quota controller
	* I1109 21:50:58.198305       1 controller_utils.go:1036] Caches are synced for resource quota controller
	* I1109 21:50:58.199356       1 controller_utils.go:1036] Caches are synced for garbage collector controller
	* 
	* ==> kube-proxy [b11a67a0f8782086f120e3966570686c284c001eea6d1baa75359d8d3192e065] <==
	* W1109 21:48:08.923186       1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
	* I1109 21:48:08.935724       1 server_others.go:143] Using iptables Proxier.
	* I1109 21:48:08.936055       1 server.go:534] Version: v1.15.7
	* I1109 21:48:09.031928       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:48:09.032120       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1109 21:48:09.033028       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1109 21:48:09.034082       1 config.go:187] Starting service config controller
	* I1109 21:48:09.034150       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
	* I1109 21:48:09.034278       1 config.go:96] Starting endpoints config controller
	* I1109 21:48:09.034359       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
	* I1109 21:48:09.134539       1 controller_utils.go:1036] Caches are synced for service config controller
	* I1109 21:48:09.134568       1 controller_utils.go:1036] Caches are synced for endpoints config controller
	* 
	* ==> kube-proxy [f089f0193b59f54e6918af2c984f2810ae84282159d2183ec3136dc1fdf8a1a8] <==
	* W1109 21:50:45.040787       1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
	* I1109 21:50:45.090320       1 server_others.go:143] Using iptables Proxier.
	* I1109 21:50:45.103508       1 server.go:534] Version: v1.15.7
	* I1109 21:50:45.112345       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:50:45.112536       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1109 21:50:45.112648       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1109 21:50:45.112827       1 config.go:96] Starting endpoints config controller
	* I1109 21:50:45.112857       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
	* I1109 21:50:45.112997       1 config.go:187] Starting service config controller
	* I1109 21:50:45.113017       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
	* I1109 21:50:45.213209       1 controller_utils.go:1036] Caches are synced for service config controller
	* I1109 21:50:45.213345       1 controller_utils.go:1036] Caches are synced for endpoints config controller
	* 
	* ==> kube-scheduler [2eb925a38dde3615c382fc88effc40690917d8bee0ee4f89aa920834a6432ffa] <==
	* W1109 21:47:44.683256       1 authentication.go:55] Authentication is disabled
	* I1109 21:47:44.683277       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	* I1109 21:47:44.684258       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	* E1109 21:47:48.080256       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:47:48.080463       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1109 21:47:48.080613       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:47:48.080644       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:47:48.080739       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:47:48.080809       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:47:48.080905       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:47:48.158424       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:47:48.160035       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1109 21:47:48.165650       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:47:49.082254       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:47:49.162497       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1109 21:47:49.165053       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:47:49.166309       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:47:49.167268       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:47:49.168337       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:47:49.169452       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:47:49.170868       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:47:49.172179       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1109 21:47:49.173066       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:48:07.184059       1 factory.go:702] pod is already present in the activeQ
	* E1109 21:48:07.260236       1 factory.go:702] pod is already present in the activeQ
	* 
	* ==> kube-scheduler [35d3262f3358bf474919114424809a44099bb6a9cd97c538fe57a06992abf607] <==
	* I1109 21:50:37.421083       1 serving.go:319] Generated self-signed cert in-memory
	* W1109 21:50:38.048191       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
	* W1109 21:50:38.048228       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
	* W1109 21:50:38.048244       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
	* I1109 21:50:38.052337       1 server.go:142] Version: v1.15.7
	* I1109 21:50:38.052412       1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	* W1109 21:50:38.054125       1 authorization.go:47] Authorization is disabled
	* W1109 21:50:38.054148       1 authentication.go:55] Authentication is disabled
	* I1109 21:50:38.054164       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	* I1109 21:50:38.054660       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	* E1109 21:50:42.571241       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2020-11-09 21:50:25 UTC, end at Mon 2020-11-09 21:51:15 UTC. --
	* Nov 09 21:50:46 crio-20201109134622-342799 kubelet[901]: I1109 21:50:46.138382     901 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000ade1a0, TRANSIENT_FAILURE
	* Nov 09 21:50:46 crio-20201109134622-342799 kubelet[901]: I1109 21:50:46.138387     901 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0000936c0, CONNECTING
	* Nov 09 21:50:46 crio-20201109134622-342799 kubelet[901]: I1109 21:50:46.138398     901 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0000936c0, TRANSIENT_FAILURE
	* Nov 09 21:50:46 crio-20201109134622-342799 kubelet[901]: E1109 21:50:46.635128     901 remote_runtime.go:182] ListPodSandbox with filter &PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},} from runtime service failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 09 21:50:46 crio-20201109134622-342799 kubelet[901]: E1109 21:50:46.635219     901 kuberuntime_sandbox.go:210] ListPodSandbox failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 09 21:50:46 crio-20201109134622-342799 kubelet[901]: E1109 21:50:46.635237     901 kubelet_pods.go:1043] Error listing containers: &status.statusError{Code:14, Message:"all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\"", Details:[]*any.Any(nil)}
	* Nov 09 21:50:46 crio-20201109134622-342799 kubelet[901]: E1109 21:50:46.635279     901 kubelet.go:1977] Failed cleaning pods: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 09 21:50:46 crio-20201109134622-342799 kubelet[901]: E1109 21:50:46.924716     901 remote_runtime.go:182] ListPodSandbox with filter nil from runtime service failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 09 21:50:46 crio-20201109134622-342799 kubelet[901]: E1109 21:50:46.924855     901 kuberuntime_sandbox.go:210] ListPodSandbox failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 09 21:50:46 crio-20201109134622-342799 kubelet[901]: E1109 21:50:46.924875     901 generic.go:205] GenericPLEG: Unable to retrieve pods: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 09 21:50:47 crio-20201109134622-342799 kubelet[901]: I1109 21:50:47.138453     901 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000ade1a0, CONNECTING
	* Nov 09 21:50:47 crio-20201109134622-342799 kubelet[901]: I1109 21:50:47.138453     901 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0000936c0, CONNECTING
	* Nov 09 21:50:47 crio-20201109134622-342799 kubelet[901]: I1109 21:50:47.138545     901 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0000936c0, READY
	* Nov 09 21:50:47 crio-20201109134622-342799 kubelet[901]: I1109 21:50:47.138601     901 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000ade1a0, READY
	* Nov 09 21:50:55 crio-20201109134622-342799 kubelet[901]: E1109 21:50:55.027867     901 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods.slice": failed to get cgroup stats for "/kubepods.slice": failed to get container info for "/kubepods.slice": unknown container "/kubepods.slice"
	* Nov 09 21:50:55 crio-20201109134622-342799 kubelet[901]: E1109 21:50:55.027961     901 helpers.go:712] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Nov 09 21:50:58 crio-20201109134622-342799 kubelet[901]: I1109 21:50:58.165484     901 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-m4km7" (UniqueName: "kubernetes.io/secret/4fab7cfe-a4f7-4fac-aa3e-f4ced316c49d-kubernetes-dashboard-token-m4km7") pod "kubernetes-dashboard-5ddb79bb9f-ghs7v" (UID: "4fab7cfe-a4f7-4fac-aa3e-f4ced316c49d")
	* Nov 09 21:50:58 crio-20201109134622-342799 kubelet[901]: I1109 21:50:58.165562     901 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/1f4847a1-1cf8-4611-9454-1e266b2e40c7-tmp-volume") pod "dashboard-metrics-scraper-c8b69c96c-d6s6b" (UID: "1f4847a1-1cf8-4611-9454-1e266b2e40c7")
	* Nov 09 21:50:58 crio-20201109134622-342799 kubelet[901]: I1109 21:50:58.165652     901 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-m4km7" (UniqueName: "kubernetes.io/secret/1f4847a1-1cf8-4611-9454-1e266b2e40c7-kubernetes-dashboard-token-m4km7") pod "dashboard-metrics-scraper-c8b69c96c-d6s6b" (UID: "1f4847a1-1cf8-4611-9454-1e266b2e40c7")
	* Nov 09 21:50:58 crio-20201109134622-342799 kubelet[901]: I1109 21:50:58.165738     901 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/4fab7cfe-a4f7-4fac-aa3e-f4ced316c49d-tmp-volume") pod "kubernetes-dashboard-5ddb79bb9f-ghs7v" (UID: "4fab7cfe-a4f7-4fac-aa3e-f4ced316c49d")
	* Nov 09 21:51:05 crio-20201109134622-342799 kubelet[901]: E1109 21:51:05.038030     901 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods.slice": failed to get cgroup stats for "/kubepods.slice": failed to get container info for "/kubepods.slice": unknown container "/kubepods.slice"
	* Nov 09 21:51:05 crio-20201109134622-342799 kubelet[901]: E1109 21:51:05.038077     901 helpers.go:712] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Nov 09 21:51:15 crio-20201109134622-342799 kubelet[901]: E1109 21:51:15.021720     901 pod_workers.go:190] Error syncing pod f185742f-0b73-4465-920b-555675a3559b ("storage-provisioner_kube-system(f185742f-0b73-4465-920b-555675a3559b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f185742f-0b73-4465-920b-555675a3559b)"
	* Nov 09 21:51:15 crio-20201109134622-342799 kubelet[901]: E1109 21:51:15.052135     901 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods.slice": failed to get cgroup stats for "/kubepods.slice": failed to get container info for "/kubepods.slice": unknown container "/kubepods.slice"
	* Nov 09 21:51:15 crio-20201109134622-342799 kubelet[901]: E1109 21:51:15.052180     901 helpers.go:712] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* 
	* ==> kubernetes-dashboard [a2385206f4e1fcc8de509bc48b88929c0f3527c5dafab15b5f18c5f7fe6170c8] <==
	* 2020/11/09 21:50:59 Using namespace: kubernetes-dashboard
	* 2020/11/09 21:50:59 Using in-cluster config to connect to apiserver
	* 2020/11/09 21:50:59 Using secret token for csrf signing
	* 2020/11/09 21:50:59 Initializing csrf token from kubernetes-dashboard-csrf secret
	* 2020/11/09 21:50:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	* 2020/11/09 21:50:59 Successful initial request to the apiserver, version: v1.15.7
	* 2020/11/09 21:50:59 Generating JWE encryption key
	* 2020/11/09 21:50:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	* 2020/11/09 21:50:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	* 2020/11/09 21:51:00 Initializing JWE encryption key from synchronized object
	* 2020/11/09 21:51:00 Creating in-cluster Sidecar client
	* 2020/11/09 21:51:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	* 2020/11/09 21:51:00 Serving insecurely on HTTP port: 9090
	* 2020/11/09 21:50:59 Starting overwatch
	* 
	* ==> storage-provisioner [0f1d181a51923b71a8c9656fbcdaeb107c270f8a06a3ad1798edc1378bff1b66] <==
	* 
	* ==> storage-provisioner [123ca3d133b5ab8f7f137c430ce09feeb423cebe0455ec9e3a44a77f3dccdfa6] <==
	* F1109 21:51:14.645854       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 13:51:14.646383  574851 out.go:286] unable to execute * 2020-11-09 21:51:07.036992 W | etcdserver: request "header:<ID:11492471218553520308 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.70.16\" mod_revision:588 > success:<request_put:<key:\"/registry/masterleases/192.168.70.16\" value_size:68 lease:2269099181698744498 >> failure:<request_range:<key:\"/registry/masterleases/192.168.70.16\" > >>" with result "size:16" took too long (286.762387ms) to execute
	: html/template:* 2020-11-09 21:51:07.036992 W | etcdserver: request "header:<ID:11492471218553520308 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.70.16\" mod_revision:588 > success:<request_put:<key:\"/registry/masterleases/192.168.70.16\" value_size:68 lease:2269099181698744498 >> failure:<request_range:<key:\"/registry/masterleases/192.168.70.16\" > >>" with result "size:16" took too long (286.762387ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E1109 13:51:15.902800  574851 logs.go:181] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 0f1d181a51923b71a8c9656fbcdaeb107c270f8a06a3ad1798edc1378bff1b66" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 0f1d181a51923b71a8c9656fbcdaeb107c270f8a06a3ad1798edc1378bff1b66": Process exited with status 1
	stdout:
	
	stderr:
	E1109 21:51:15.887204    4645 remote_runtime.go:295] ContainerStatus "0f1d181a51923b71a8c9656fbcdaeb107c270f8a06a3ad1798edc1378bff1b66" from runtime service failed: rpc error: code = NotFound desc = could not find container "0f1d181a51923b71a8c9656fbcdaeb107c270f8a06a3ad1798edc1378bff1b66": container with ID starting with 0f1d181a51923b71a8c9656fbcdaeb107c270f8a06a3ad1798edc1378bff1b66 not found: ID does not exist
	time="2020-11-09T21:51:15Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"0f1d181a51923b71a8c9656fbcdaeb107c270f8a06a3ad1798edc1378bff1b66\": container with ID starting with 0f1d181a51923b71a8c9656fbcdaeb107c270f8a06a3ad1798edc1378bff1b66 not found: ID does not exist"
	 output: "\n** stderr ** \nE1109 21:51:15.887204    4645 remote_runtime.go:295] ContainerStatus \"0f1d181a51923b71a8c9656fbcdaeb107c270f8a06a3ad1798edc1378bff1b66\" from runtime service failed: rpc error: code = NotFound desc = could not find container \"0f1d181a51923b71a8c9656fbcdaeb107c270f8a06a3ad1798edc1378bff1b66\": container with ID starting with 0f1d181a51923b71a8c9656fbcdaeb107c270f8a06a3ad1798edc1378bff1b66 not found: ID does not exist\ntime=\"2020-11-09T21:51:15Z\" level=fatal msg=\"rpc error: code = NotFound desc = could not find container \\\"0f1d181a51923b71a8c9656fbcdaeb107c270f8a06a3ad1798edc1378bff1b66\\\": container with ID starting with 0f1d181a51923b71a8c9656fbcdaeb107c270f8a06a3ad1798edc1378bff1b66 not found: ID does not exist\"\n\n** /stderr **"
	! unable to fetch logs for: storage-provisioner [0f1d181a51923b71a8c9656fbcdaeb107c270f8a06a3ad1798edc1378bff1b66]

                                                
                                                
** /stderr **
helpers_test.go:243: failed logs error: exit status 110
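Reading the capture above, the exit status 110 from the logs helper traces back to two things: the restarted storage-provisioner container could not reach the apiserver service IP ("dial tcp 10.96.0.1:443: i/o timeout"), and after CRI-O was restarted the previous storage-provisioner container ID (0f1d181a5192...) was no longer known to the runtime, so `crictl logs` returned NotFound. The sketch below is illustrative only, not the minikube helper itself; it shows one way to shell out to `sudo /usr/bin/crictl logs --tail 25 <id>` (the exact command seen above) and treat a container the runtime has forgotten as a soft failure rather than an error. The `runCrictlLogs` helper name is made up; the container ID is copied from the capture.

// Illustrative sketch only — not minikube's implementation.
// Runs `crictl logs` the same way the failing helper does and treats a
// container the runtime no longer knows about (e.g. after a CRI-O
// restart) as a warning instead of a hard error.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runCrictlLogs is a hypothetical helper: it fetches the last `tail`
// lines of a container's logs via crictl, returning ok=false when the
// container ID is not found by the runtime.
func runCrictlLogs(containerID string, tail int) (out string, ok bool, err error) {
	cmd := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", fmt.Sprint(tail), containerID)
	b, cmdErr := cmd.CombinedOutput()
	if cmdErr != nil {
		// crictl reports "NotFound" / "not found" when the container no
		// longer exists, as seen in the stderr capture above.
		if strings.Contains(string(b), "not found") || strings.Contains(string(b), "NotFound") {
			return string(b), false, nil
		}
		return string(b), false, cmdErr
	}
	return string(b), true, nil
}

func main() {
	// Container ID copied from the failing capture above.
	id := "0f1d181a51923b71a8c9656fbcdaeb107c270f8a06a3ad1798edc1378bff1b66"
	out, ok, err := runCrictlLogs(id, 25)
	switch {
	case err != nil:
		fmt.Println("crictl failed:", err)
	case !ok:
		fmt.Println("container gone, skipping logs:", id)
	default:
		fmt.Println(out)
	}
}

In the real run the helper instead surfaces this as "! unable to fetch logs for: storage-provisioner [...]" and folds it into exit status 110, which is what triggers the post-mortem below.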
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/crio/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:225: (dbg) Run:  docker inspect crio-20201109134622-342799
helpers_test.go:229: (dbg) docker inspect crio-20201109134622-342799:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53765626b7a1aecfcce9eed5681d182b4cd6444865c6ca096af2113158afe4dc",
	        "Created": "2020-11-09T21:46:25.088398261Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 561124,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-11-09T21:50:25.404435082Z",
	            "FinishedAt": "2020-11-09T21:50:02.674640676Z"
	        },
	        "Image": "sha256:e0876e0a2db41a04c8143cbd27a7a5f9e10b610cb093093def031c59e5b44b0c",
	        "ResolvConfPath": "/var/lib/docker/containers/53765626b7a1aecfcce9eed5681d182b4cd6444865c6ca096af2113158afe4dc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53765626b7a1aecfcce9eed5681d182b4cd6444865c6ca096af2113158afe4dc/hostname",
	        "HostsPath": "/var/lib/docker/containers/53765626b7a1aecfcce9eed5681d182b4cd6444865c6ca096af2113158afe4dc/hosts",
	        "LogPath": "/var/lib/docker/containers/53765626b7a1aecfcce9eed5681d182b4cd6444865c6ca096af2113158afe4dc/53765626b7a1aecfcce9eed5681d182b4cd6444865c6ca096af2113158afe4dc-json.log",
	        "Name": "/crio-20201109134622-342799",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "crio-20201109134622-342799:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "crio-20201109134622-342799",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bcb35dc41ffa0fca1037196f1e0324b9e4c4e8f0245b350bb2ef281580ee2585-init/diff:/var/lib/docker/overlay2/95e86d765bff52afd170fbce58d14aba80acb592622f18ddf93f06d4a9f68b7b/diff:/var/lib/docker/overlay2/f3b9d7ca87d0bc2c33a2b73d4acfe36b9573ed84758410f9fb2646a520f241fe/diff:/var/lib/docker/overlay2/207fc0f6ee67dd96ccb25c63e93e7189f5f397b509f8c71ead5583a80c5f94b5/diff:/var/lib/docker/overlay2/8caa5ec9f05b14f99c7708700c1734fe9d519d629a5228c0c3cb1231f1cef09e/diff:/var/lib/docker/overlay2/f2f5d9470c647e56b32cee1438db6b6249291738854b6e7a0a4a405034649aa6/diff:/var/lib/docker/overlay2/99b3672a581ed70eb5d25da5aed44455272a7adbd48148c91dc38abfbaa6e93d/diff:/var/lib/docker/overlay2/4bc3f7ac122327f368cdfb9bf82d578fce25cf7f63160aa08d869610da8889f4/diff:/var/lib/docker/overlay2/0f945a2ada7520d0f95289ddf08a8e7aedfd5dbfad8087529175cd81d61b52bc/diff:/var/lib/docker/overlay2/a14122d44a5bdb05416a0e7fbe1684738e86454c2eeb7671ecc856e6fdaa0273/diff:/var/lib/docker/overlay2/e669c0
1451df27b1aeed4835d4d9d40fae282947f2b260d9bc1d86355f464b80/diff:/var/lib/docker/overlay2/f0227d1f2e3671850f58a9e89d347d8c2a26c40d79b078f02122819452c582d8/diff:/var/lib/docker/overlay2/35b3431b00b5c7bd66b1d24487a8a45645708ce7a87848986068c0df1515b812/diff:/var/lib/docker/overlay2/c1af3457c560ed19e519e64ff919ac7035d8d9801f4f44adf7b160339827ebd0/diff:/var/lib/docker/overlay2/3861f37a52f49a93d483d69cc77384a5792b260325af8224c6225039f7480889/diff:/var/lib/docker/overlay2/05239e7556fc89f82e761ff1ddd2d1ed0ee1b426a039ad3c9a3ba0519d737f51/diff:/var/lib/docker/overlay2/cb1f04e7554841e6f2102080f7854a3c7e464e45648c1419ef2f8b4d8a03f0bb/diff:/var/lib/docker/overlay2/80908997b1122a686bee670e8dd93659dece6b51614d8576f02c11b96178534d/diff:/var/lib/docker/overlay2/6c4d0be909f5f0a95818a6009b44c355b57e39b0709de1cce2ba45597354d590/diff:/var/lib/docker/overlay2/3a855d5f2b162c2d938a5745ba8131c9a2b244967b4a03f6cddd177236f3a934/diff:/var/lib/docker/overlay2/0ab0a2a3e614830a41a389aa6ff55503f9c10b1227acfb927550f9726eb9a605/diff:/var/lib/d
ocker/overlay2/3331dc047d4c1c4d3b6bf48aba65be1ad34cff6d25329ef56e91ad8e4ee65fde/diff:/var/lib/docker/overlay2/d286c2ac5bb00869e09924bb7fc7389073b7267af1cc4c3ae6a339b29e4c0cc1/diff:/var/lib/docker/overlay2/d0a518fbd2bc93756d11040cfe4d7757f96e813dd9ddf918a54a82f7e61cb791/diff:/var/lib/docker/overlay2/2d63359a4d0da5faa54d017f487391caaee972fe4fb98db4b82e7aee2d9b01ee/diff:/var/lib/docker/overlay2/40ca142bba947004e2782bce25bd04810589fffbf8086ba91e3a074517cfa13c/diff:/var/lib/docker/overlay2/5b738c1270b81825ef44e9bd466dd8533bc4e3e3796355d7dc5dc104fb3a18c8/diff:/var/lib/docker/overlay2/c57d8a6f6b648bd2cf9d7140762b8590e1a71b9040c8a0f2b8ab31f65011bcd2/diff:/var/lib/docker/overlay2/37f83aba64f92a6cf695468f5b0a1016634c95e09933dcecec297cca22b9038b/diff:/var/lib/docker/overlay2/396d3843ed4e04893bd4bf961eab2522ce90962e626e057f5be38cbe0ed65d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bcb35dc41ffa0fca1037196f1e0324b9e4c4e8f0245b350bb2ef281580ee2585/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bcb35dc41ffa0fca1037196f1e0324b9e4c4e8f0245b350bb2ef281580ee2585/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bcb35dc41ffa0fca1037196f1e0324b9e4c4e8f0245b350bb2ef281580ee2585/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "crio-20201109134622-342799",
	                "Source": "/var/lib/docker/volumes/crio-20201109134622-342799/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "crio-20201109134622-342799",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "crio-20201109134622-342799",
	                "name.minikube.sigs.k8s.io": "crio-20201109134622-342799",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "13b9d275165168663b835e9d5d8d8558d8896b9ced8541731d3ddafb0f009d96",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/13b9d2751651",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "crio-20201109134622-342799": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.16"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "53765626b7a1"
	                    ],
	                    "NetworkID": "0d123abca8e29edd152bc5f0dba276d287d6db64da00a592215f49e48a6ff594",
	                    "EndpointID": "d2aa82c00c75acd80aa40fed88c56ea6f7ba4667c62d7e808fa5a6c382a27977",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:10",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
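The inspect output shows the node container itself is healthy at this point: `.State.Status` is `running` with `RestartCount` 0, the image is the expected kicbase v0.0.14 snapshot, port 8443 is published on 127.0.0.1:33100, and the container holds 192.168.70.16 on the `crio-20201109134622-342799` network. As a minimal sketch (not part of the test suite), the handful of fields the post-mortem cares about can be pulled out with `docker inspect --format` and Go templates; the field paths below mirror the JSON above, while the `inspectField` helper and the printed keys are made up for illustration.

// Minimal sketch (not from the test suite): extract post-mortem-relevant
// fields from `docker inspect` via Go templates, matching the JSON above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspectField runs `docker inspect --format <tmpl> <name>` and returns
// the trimmed result, e.g. "running" for {{.State.Status}}.
func inspectField(name, tmpl string) (string, error) {
	out, err := exec.Command("docker", "inspect", "--format", tmpl, name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	name := "crio-20201109134622-342799"
	templates := map[string]string{
		"state":        "{{.State.Status}}",
		"restartCount": "{{.RestartCount}}",
		// Host side of the published API server port (127.0.0.1:33100 above).
		"apiserver": `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostIp}}:{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
		// Container IP on the per-profile network (192.168.70.16 above).
		"nodeIP": fmt.Sprintf(`{{(index .NetworkSettings.Networks %q).IPAddress}}`, name),
	}
	for key, tmpl := range templates {
		val, err := inspectField(name, tmpl)
		if err != nil {
			fmt.Printf("%s: inspect failed: %v\n", key, err)
			continue
		}
		fmt.Printf("%s: %s\n", key, val)
	}
}

Run against the container above, this should print state "running", restart count 0, apiserver 127.0.0.1:33100, and node IP 192.168.70.16, consistent with the JSON; the interesting failures are therefore inside the node (CRI-O restart, apiserver reachability), not in the Docker container itself, which is what the minikube logs below dig into.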
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p crio-20201109134622-342799 -n crio-20201109134622-342799
helpers_test.go:238: <<< TestStartStop/group/crio/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/crio/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p crio-20201109134622-342799 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/VerifyKubernetesImages
helpers_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p crio-20201109134622-342799 logs -n 25: (2.89784861s)
helpers_test.go:246: TestStartStop/group/crio/serial/VerifyKubernetesImages logs: 
-- stdout --
	* ==> CRI-O <==
	* -- Logs begin at Mon 2020-11-09 21:50:25 UTC, end at Mon 2020-11-09 21:51:17 UTC. --
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.141241826Z" level=error msg="Failed to update container state for 6a9ffdf3d109926ad93745d5b40f9025becf65371b211c619e87b509bcb4829f: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"6a9ffdf3d109926ad93745d5b40f9025becf65371b211c619e87b509bcb4829f\\\" does not exist\"\ncontainer \"6a9ffdf3d109926ad93745d5b40f9025becf65371b211c619e87b509bcb4829f\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.156050272Z" level=error msg="Failed to update container state for 8088e396f476344930ba74cd357f9a5ba5ea8baa0c560fbd8f657dbdac6efa6d: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"8088e396f476344930ba74cd357f9a5ba5ea8baa0c560fbd8f657dbdac6efa6d\\\" does not exist\"\ncontainer \"8088e396f476344930ba74cd357f9a5ba5ea8baa0c560fbd8f657dbdac6efa6d\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.169675012Z" level=error msg="Failed to update container state for 53e7311251b35d0669b1425683900bbd0d895a04628ae2160fac2909f1d44144: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"53e7311251b35d0669b1425683900bbd0d895a04628ae2160fac2909f1d44144\\\" does not exist\"\ncontainer \"53e7311251b35d0669b1425683900bbd0d895a04628ae2160fac2909f1d44144\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.182431306Z" level=error msg="Failed to update container state for ecc5233f346a9911101335f7ece030603228cf8e52874a74d2747bd258e121c9: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"ecc5233f346a9911101335f7ece030603228cf8e52874a74d2747bd258e121c9\\\" does not exist\"\ncontainer \"ecc5233f346a9911101335f7ece030603228cf8e52874a74d2747bd258e121c9\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.195135428Z" level=error msg="Failed to update container state for 954641fb62a5df6091abcf43cd3d7f8a28f354523f1f5c3379639d367932df42: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"954641fb62a5df6091abcf43cd3d7f8a28f354523f1f5c3379639d367932df42\\\" does not exist\"\ncontainer \"954641fb62a5df6091abcf43cd3d7f8a28f354523f1f5c3379639d367932df42\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.206085720Z" level=error msg="Failed to update container state for f9d705b3e01f96055ac6a99a78db798454ad08d51b30f585ad4a38d4e9009787: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"f9d705b3e01f96055ac6a99a78db798454ad08d51b30f585ad4a38d4e9009787\\\" does not exist\"\ncontainer \"f9d705b3e01f96055ac6a99a78db798454ad08d51b30f585ad4a38d4e9009787\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.217595960Z" level=error msg="Failed to update container state for 0535df8d071879c8406f331b9f4409352432de9cc58d427b7dfce4793a673601: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"0535df8d071879c8406f331b9f4409352432de9cc58d427b7dfce4793a673601\\\" does not exist\"\ncontainer \"0535df8d071879c8406f331b9f4409352432de9cc58d427b7dfce4793a673601\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.230683459Z" level=error msg="Failed to update container state for ce445f81c06114e1d9f61dd7f989f39d53b9b0a7ef1d708d62e7fcc8f8d9329c: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"ce445f81c06114e1d9f61dd7f989f39d53b9b0a7ef1d708d62e7fcc8f8d9329c\\\" does not exist\"\ncontainer \"ce445f81c06114e1d9f61dd7f989f39d53b9b0a7ef1d708d62e7fcc8f8d9329c\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.244719045Z" level=error msg="Failed to update container state for 027e4e74e2d9e5c599d18c10b2aada6f44fd57270052cfa8c2697228dc2c6568: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"027e4e74e2d9e5c599d18c10b2aada6f44fd57270052cfa8c2697228dc2c6568\\\" does not exist\"\ncontainer \"027e4e74e2d9e5c599d18c10b2aada6f44fd57270052cfa8c2697228dc2c6568\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.257169178Z" level=error msg="Failed to update container state for 0f1d181a51923b71a8c9656fbcdaeb107c270f8a06a3ad1798edc1378bff1b66: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"0f1d181a51923b71a8c9656fbcdaeb107c270f8a06a3ad1798edc1378bff1b66\\\" does not exist\"\ncontainer \"0f1d181a51923b71a8c9656fbcdaeb107c270f8a06a3ad1798edc1378bff1b66\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.269071075Z" level=error msg="Failed to update container state for 2eb925a38dde3615c382fc88effc40690917d8bee0ee4f89aa920834a6432ffa: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"2eb925a38dde3615c382fc88effc40690917d8bee0ee4f89aa920834a6432ffa\\\" does not exist\"\ncontainer \"2eb925a38dde3615c382fc88effc40690917d8bee0ee4f89aa920834a6432ffa\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.281687309Z" level=error msg="Failed to update container state for fa2190fa1040ee01540dbc6246afa3ed28278757193f46497740f98f208305f0: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"fa2190fa1040ee01540dbc6246afa3ed28278757193f46497740f98f208305f0\\\" does not exist\"\ncontainer \"fa2190fa1040ee01540dbc6246afa3ed28278757193f46497740f98f208305f0\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.294324905Z" level=error msg="Failed to update container state for b11a67a0f8782086f120e3966570686c284c001eea6d1baa75359d8d3192e065: stdout: , stderr: time=\"2020-11-09T21:50:31Z\" level=error msg=\"container \\\"b11a67a0f8782086f120e3966570686c284c001eea6d1baa75359d8d3192e065\\\" does not exist\"\ncontainer \"b11a67a0f8782086f120e3966570686c284c001eea6d1baa75359d8d3192e065\" does not exist\n"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.294893198Z" level=error msg="Error checking loopback interface: failed to Statfs \"\": no such file or directory"
	* Nov 09 21:50:31 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:31.294948513Z" level=error msg="Error checking loopback interface: failed to Statfs \"\": no such file or directory"
	* Nov 09 21:50:31 crio-20201109134622-342799 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	* Nov 09 21:50:43 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:43.138273225Z" level=error msg="Ignoring error tearing down loopback interface: failed to Statfs \"\": no such file or directory"
	* Nov 09 21:50:43 crio-20201109134622-342799 crio[419]: time="2020-11-09 21:50:43.498840854Z" level=error msg="Ignoring error tearing down loopback interface: failed to Statfs \"\": no such file or directory"
	* Nov 09 21:50:46 crio-20201109134622-342799 systemd[1]: Stopping Container Runtime Interface for OCI (CRI-O)...
	* Nov 09 21:50:46 crio-20201109134622-342799 systemd[1]: crio.service: Succeeded.
	* Nov 09 21:50:46 crio-20201109134622-342799 systemd[1]: Stopped Container Runtime Interface for OCI (CRI-O).
	* Nov 09 21:50:46 crio-20201109134622-342799 systemd[1]: Starting Container Runtime Interface for OCI (CRI-O)...
	* Nov 09 21:50:47 crio-20201109134622-342799 crio[3240]: time="2020-11-09 21:50:47.146967134Z" level=error msg="Error checking loopback interface: failed to Statfs \"\": no such file or directory"
	* Nov 09 21:50:47 crio-20201109134622-342799 crio[3240]: time="2020-11-09 21:50:47.147067826Z" level=error msg="Error checking loopback interface: failed to Statfs \"\": no such file or directory"
	* Nov 09 21:50:47 crio-20201109134622-342799 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                        ATTEMPT             POD ID
	* c835dd6c9df84       86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4                                     17 seconds ago       Running             dashboard-metrics-scraper   0                   8694b6bbf6fe4
	* a2385206f4e1f       503bc4b7440b9039d0a18858bba30906e25e2f690094b2b4f8735dfc3609cda2                                     17 seconds ago       Running             kubernetes-dashboard        0                   4b44a8728c5aa
	* 9c904ee6009b9       2186a1a396deb58f1ea5eaf20193a518ca05049b46ccd754ec83366b5c8c13d5                                     32 seconds ago       Running             kindnet-cni                 1                   241f57c295767
	* 2bbdc97480d6c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                     32 seconds ago       Running             busybox                     1                   932bf95124b13
	* 32c570294975d       eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c                                     32 seconds ago       Running             coredns                     1                   22865c95bec02
	* f089f0193b59f       ae3d9889423ede337df3814baa77445e566597a5a882f3cdf933b4d9e0025f0f                                     32 seconds ago       Running             kube-proxy                  1                   fbd549c871e59
	* 123ca3d133b5a       bad58561c4be797bbea256940eea1d9967b7581d1103f9e4f03f32936c1ae289                                     32 seconds ago       Exited              storage-provisioner         2                   fb74407ee4dd6
	* 35d3262f3358b       78b4180ab00d0fb99b1be2b5ef92a4831ad07f00f27e6746828f374497d79367                                     40 seconds ago       Running             kube-scheduler              1                   5fbb4926d94f5
	* 924ace492bcb4       c500a024ff843278184e5454ff6ee040a106c867c5a0361886fd3057cace2264                                     41 seconds ago       Running             kube-apiserver              1                   27a2f0b2d5892
	* 6b2af65166e1e       d2f090f2479fbf92c508100e0a6106b3516bb70421a465586661feb1494145a2                                     41 seconds ago       Running             kube-controller-manager     0                   158c81fa54c66
	* 55a92deeac60b       2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d                                     41 seconds ago       Running             etcd                        1                   f1c3b3e0dc6c7
	* 954641fb62a5d       docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998    About a minute ago   Exited              busybox                     0                   7cc86efc09350
	* 027e4e74e2d9e       eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c                                     2 minutes ago        Exited              coredns                     0                   53e7311251b35
	* ce445f81c0611       docker.io/kindest/kindnetd@sha256:46e34ccb3e08557767b7c80e957741d9f2590968ff32646875632d40cf62adad   3 minutes ago        Exited              kindnet-cni                 0                   f669579f75dba
	* b11a67a0f8782       ae3d9889423ede337df3814baa77445e566597a5a882f3cdf933b4d9e0025f0f                                     3 minutes ago        Exited              kube-proxy                  0                   608d48dcacece
	* fa2190fa1040e       c500a024ff843278184e5454ff6ee040a106c867c5a0361886fd3057cace2264                                     3 minutes ago        Exited              kube-apiserver              0                   16b3f4010c0a5
	* 2eb925a38dde3       78b4180ab00d0fb99b1be2b5ef92a4831ad07f00f27e6746828f374497d79367                                     3 minutes ago        Exited              kube-scheduler              0                   6a9ffdf3d1099
	* 0535df8d07187       d2f090f2479fbf92c508100e0a6106b3516bb70421a465586661feb1494145a2                                     3 minutes ago        Exited              kube-controller-manager     0                   8088e396f4763
	* f9d705b3e01f9       2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d                                     3 minutes ago        Exited              etcd                        0                   ecc5233f346a9
	* 
	* ==> coredns [027e4e74e2d9e5c599d18c10b2aada6f44fd57270052cfa8c2697228dc2c6568] <==
	* .:53
	* 2020-11-09T21:48:34.504Z [INFO] CoreDNS-1.3.1
	* 2020-11-09T21:48:34.504Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	* CoreDNS-1.3.1
	* linux/amd64, go1.11.4, 6b56a9c
	* 2020-11-09T21:48:34.504Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
	* [INFO] SIGTERM: Shutting down servers then terminating
	* 
	* ==> coredns [32c570294975d3d7dfd7878495a9dd37ec0ea661ca68fed714a34d76a0c12dfa] <==
	* .:53
	* 2020-11-09T21:50:49.789Z [INFO] CoreDNS-1.3.1
	* 2020-11-09T21:50:49.789Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	* CoreDNS-1.3.1
	* linux/amd64, go1.11.4, 6b56a9c
	* 2020-11-09T21:50:49.789Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
	* E1109 21:51:14.790379       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* E1109 21:51:14.790515       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* E1109 21:51:14.790767       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* 
	* ==> describe nodes <==
	* Name:               crio-20201109134622-342799
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=crio-20201109134622-342799
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=21ac2a6a37964be4739a8be2fb5a50a8d224597d
	*                     minikube.k8s.io/name=crio-20201109134622-342799
	*                     minikube.k8s.io/updated_at=2020_11_09T13_47_53_0700
	*                     minikube.k8s.io/version=v1.14.2
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Mon, 09 Nov 2020 21:47:48 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Mon, 09 Nov 2020 21:50:42 +0000   Mon, 09 Nov 2020 21:47:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Mon, 09 Nov 2020 21:50:42 +0000   Mon, 09 Nov 2020 21:47:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Mon, 09 Nov 2020 21:50:42 +0000   Mon, 09 Nov 2020 21:47:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Mon, 09 Nov 2020 21:50:42 +0000   Mon, 09 Nov 2020 21:47:43 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.70.16
	*   Hostname:    crio-20201109134622-342799
	* Capacity:
	*  cpu:                8
	*  ephemeral-storage:  515928484Ki
	*  hugepages-1Gi:      0
	*  hugepages-2Mi:      0
	*  memory:             30887000Ki
	*  pods:               110
	* Allocatable:
	*  cpu:                8
	*  ephemeral-storage:  515928484Ki
	*  hugepages-1Gi:      0
	*  hugepages-2Mi:      0
	*  memory:             30887000Ki
	*  pods:               110
	* System Info:
	*  Machine ID:                 2fa66088a34e46d282e6f7e772ab4aea
	*  System UUID:                16c119b5-8489-44d3-86c7-7f1b70fd0010
	*  Boot ID:                    9ad1ab50-5be9-48e2-8ae1-dc31113bc120
	*  Kernel Version:             4.9.0-14-amd64
	*  OS Image:                   Ubuntu 20.04.1 LTS
	*  Operating System:           linux
	*  Architecture:               amd64
	*  Container Runtime Version:  cri-o://1.18.3
	*  Kubelet Version:            v1.15.7
	*  Kube-Proxy Version:         v1.15.7
	* PodCIDR:                     10.244.0.0/24
	* Non-terminated Pods:         (11 in total)
	*   Namespace                  Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                  ----                                                  ------------  ----------  ---------------  -------------  ---
	*   default                    busybox                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	*   kube-system                coredns-5d4dd4b4db-wczxf                              100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m10s
	*   kube-system                etcd-crio-20201109134622-342799                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	*   kube-system                kindnet-n7x9d                                         100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m10s
	*   kube-system                kube-apiserver-crio-20201109134622-342799             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	*   kube-system                kube-controller-manager-crio-20201109134622-342799    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	*   kube-system                kube-proxy-q5gpm                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	*   kube-system                kube-scheduler-crio-20201109134622-342799             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m18s
	*   kube-system                storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	*   kubernetes-dashboard       dashboard-metrics-scraper-c8b69c96c-d6s6b             0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	*   kubernetes-dashboard       kubernetes-dashboard-5ddb79bb9f-ghs7v                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                750m (9%)   100m (1%)
	*   memory             120Mi (0%)  220Mi (0%)
	*   ephemeral-storage  0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age                    From                                    Message
	*   ----    ------                   ----                   ----                                    -------
	*   Normal  NodeHasSufficientMemory  3m37s (x7 over 3m37s)  kubelet, crio-20201109134622-342799     Node crio-20201109134622-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    3m37s (x8 over 3m37s)  kubelet, crio-20201109134622-342799     Node crio-20201109134622-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     3m37s (x8 over 3m37s)  kubelet, crio-20201109134622-342799     Node crio-20201109134622-342799 status is now: NodeHasSufficientPID
	*   Normal  Starting                 3m8s                   kube-proxy, crio-20201109134622-342799  Starting kube-proxy.
	*   Normal  Starting                 43s                    kubelet, crio-20201109134622-342799     Starting kubelet.
	*   Normal  NodeAllocatableEnforced  43s                    kubelet, crio-20201109134622-342799     Updated Node Allocatable limit across pods
	*   Normal  NodeHasSufficientMemory  42s (x8 over 43s)      kubelet, crio-20201109134622-342799     Node crio-20201109134622-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    42s (x8 over 43s)      kubelet, crio-20201109134622-342799     Node crio-20201109134622-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     42s (x7 over 43s)      kubelet, crio-20201109134622-342799     Node crio-20201109134622-342799 status is now: NodeHasSufficientPID
	*   Normal  Starting                 32s                    kube-proxy, crio-20201109134622-342799  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [  +0.000002] ll header: 00000000: 02 42 cf 5a 89 34 02 42 c0 a8 46 10 08 00        .B.Z.4.B..F...
	* [  +4.031819] net_ratelimit: 1 callbacks suppressed
	* [  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-0d123abca8e2
	* [  +0.000003] ll header: 00000000: 02 42 cf 5a 89 34 02 42 c0 a8 46 10 08 00        .B.Z.4.B..F...
	* [  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-0d123abca8e2
	* [  +0.000001] ll header: 00000000: 02 42 cf 5a 89 34 02 42 c0 a8 46 10 08 00        .B.Z.4.B..F...
	* [  +0.000028] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-0d123abca8e2
	* [  +0.000003] ll header: 00000000: 02 42 cf 5a 89 34 02 42 c0 a8 46 10 08 00        .B.Z.4.B..F...
	* [  +6.213984] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev cni0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 32 61 1b c2 2c d7 08 06        ......2a..,...
	* [  +0.000004] IPv4: martian source 10.85.0.2 from 10.85.0.2, on dev eth0
	* [  +0.000001] ll header: 00000000: ff ff ff ff ff ff 32 61 1b c2 2c d7 08 06        ......2a..,...
	* [  +0.555222] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vethab3a4d18
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff da b2 d2 e5 1b 80 08 06        ..............
	* [  +0.032176] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethde50f238
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 4a 94 51 4e 3f 5e 08 06        ......J.QN?^..
	* [  +1.390019] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-0d123abca8e2
	* [  +0.000004] ll header: 00000000: 02 42 cf 5a 89 34 02 42 c0 a8 46 10 08 00        .B.Z.4.B..F...
	* [  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-0d123abca8e2
	* [  +0.000001] ll header: 00000000: 02 42 cf 5a 89 34 02 42 c0 a8 46 10 08 00        .B.Z.4.B..F...
	* [  +0.003818] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-0d123abca8e2
	* [  +0.000004] ll header: 00000000: 02 42 cf 5a 89 34 02 42 c0 a8 46 10 08 00        .B.Z.4.B..F...
	* [Nov 9 21:51] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth30a282a7
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff f2 42 0b 96 2c 5b 08 06        .......B..,[..
	* [  +5.144853] cgroup: cgroup2: unknown option "nsdelegate"
	* 
	* ==> etcd [55a92deeac60bdca4d5cac0d0d2beae33d2cde265c9a526e7545802eb78acba9] <==
	* 2020-11-09 21:50:38.063263 I | embed: serving client requests on 192.168.70.16:2379
	* 2020-11-09 21:50:38.064215 I | embed: serving client requests on 127.0.0.1:2379
	* proto: no coders for int
	* proto: no encoder for ValueSize int [GetProperties]
	* 2020-11-09 21:50:59.570967 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:2 size:3432" took too long (125.850046ms) to execute
	* 2020-11-09 21:50:59.571113 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/coredns-5d4dd4b4db-wczxf.1645f5635df031c3\" " with result "range_response_count:1 size:515" took too long (214.824381ms) to execute
	* 2020-11-09 21:51:06.749891 W | wal: sync duration of 2.375672244s, expected less than 1s
	* 2020-11-09 21:51:07.036992 W | etcdserver: request "header:<ID:11492471218553520308 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.70.16\" mod_revision:588 > success:<request_put:<key:\"/registry/masterleases/192.168.70.16\" value_size:68 lease:2269099181698744498 >> failure:<request_range:<key:\"/registry/masterleases/192.168.70.16\" > >>" with result "size:16" took too long (286.762387ms) to execute
	* 2020-11-09 21:51:07.037173 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:2 size:3508" took too long (2.592072771s) to execute
	* 2020-11-09 21:51:07.038059 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:3834" took too long (1.174293488s) to execute
	* 2020-11-09 21:51:09.509225 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (2.353197172s) to execute
	* 2020-11-09 21:51:09.509280 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/coredns-5d4dd4b4db-wczxf.1645f5635df031c3\" " with result "range_response_count:1 size:515" took too long (153.034025ms) to execute
	* 2020-11-09 21:51:09.509365 W | etcdserver: read-only range request "key:\"foo\" " with result "range_response_count:0 size:5" took too long (1.944283324s) to execute
	* 2020-11-09 21:51:09.509413 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:2 size:3508" took too long (1.460401622s) to execute
	* 2020-11-09 21:51:11.166444 W | wal: sync duration of 1.647205331s, expected less than 1s
	* 2020-11-09 21:51:12.260678 W | wal: sync duration of 1.093831389s, expected less than 1s
	* 2020-11-09 21:51:12.260858 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (2.747932882s) to execute
	* 2020-11-09 21:51:12.261027 W | etcdserver: request "header:<ID:11492471218553520319 > lease_revoke:<id:1f7d75aefd488042>" with result "size:29" took too long (1.094293392s) to execute
	* 2020-11-09 21:51:12.261107 W | etcdserver: failed to revoke 1f7d75aefd488042 ("lease not found")
	* 2020-11-09 21:51:12.261128 W | etcdserver: failed to revoke 1f7d75aefd488042 ("lease not found")
	* 2020-11-09 21:51:12.261138 W | etcdserver: failed to revoke 1f7d75aefd488042 ("lease not found")
	* 2020-11-09 21:51:12.261216 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:2 size:3508" took too long (2.212400075s) to execute
	* 2020-11-09 21:51:12.261295 W | etcdserver: failed to revoke 1f7d75aefd488042 ("lease not found")
	* 2020-11-09 21:51:12.261387 W | etcdserver: failed to revoke 1f7d75aefd488042 ("lease not found")
	* 2020-11-09 21:51:12.261436 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.350224788s) to execute
	* 
	* ==> etcd [f9d705b3e01f96055ac6a99a78db798454ad08d51b30f585ad4a38d4e9009787] <==
	* 2020-11-09 21:48:59.362373 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:3588" took too long (12.989808712s) to execute
	* 2020-11-09 21:48:59.362490 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (11.172934921s) to execute
	* 2020-11-09 21:48:59.362605 W | etcdserver: read-only range request "key:\"/registry/volumeattachments\" range_end:\"/registry/volumeattachmentt\" count_only:true " with result "range_response_count:0 size:5" took too long (11.959868192s) to execute
	* 2020-11-09 21:48:59.362745 W | etcdserver: read-only range request "key:\"/registry/clusterroles\" range_end:\"/registry/clusterrolet\" count_only:true " with result "range_response_count:0 size:7" took too long (12.099491944s) to execute
	* 2020-11-09 21:48:59.363057 W | etcdserver: failed to revoke 1f7d75aefa9f622b ("lease not found")
	* 2020-11-09 21:48:59.363076 W | etcdserver: failed to revoke 1f7d75aefa9f622b ("lease not found")
	* 2020-11-09 21:48:59.363099 W | etcdserver: read-only range request "key:\"/registry/statefulsets\" range_end:\"/registry/statefulsett\" count_only:true " with result "range_response_count:0 size:5" took too long (13.858083946s) to execute
	* 2020-11-09 21:48:59.363212 W | etcdserver: failed to revoke 1f7d75aefa9f622b ("lease not found")
	* 2020-11-09 21:48:59.363225 W | etcdserver: failed to revoke 1f7d75aefa9f622b ("lease not found")
	* 2020-11-09 21:48:59.363233 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (13.782526299s) to execute
	* 2020-11-09 21:48:59.363241 W | etcdserver: failed to revoke 1f7d75aefa9f622b ("lease not found")
	* 2020-11-09 21:48:59.363376 W | etcdserver: failed to revoke 1f7d75aefa9f622b ("lease not found")
	* 2020-11-09 21:48:59.380873 W | etcdserver: read-only range request "key:\"/registry/cronjobs\" range_end:\"/registry/cronjobt\" count_only:true " with result "range_response_count:0 size:5" took too long (5.465839894s) to execute
	* 2020-11-09 21:48:59.381157 W | etcdserver: read-only range request "key:\"/registry/daemonsets\" range_end:\"/registry/daemonsett\" count_only:true " with result "range_response_count:0 size:7" took too long (4.732427745s) to execute
	* 2020-11-09 21:48:59.382929 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:4 size:7989" took too long (4.874824141s) to execute
	* 2020-11-09 21:48:59.383155 W | etcdserver: read-only range request "key:\"/registry/jobs\" range_end:\"/registry/jobt\" count_only:true " with result "range_response_count:0 size:5" took too long (4.5310944s) to execute
	* 2020-11-09 21:48:59.383878 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (2.478789707s) to execute
	* 2020-11-09 21:48:59.384069 W | etcdserver: read-only range request "key:\"/registry/ingress\" range_end:\"/registry/ingrest\" count_only:true " with result "range_response_count:0 size:5" took too long (1.447378192s) to execute
	* 2020-11-09 21:48:59.384238 W | etcdserver: read-only range request "key:\"/registry/daemonsets\" range_end:\"/registry/daemonsett\" count_only:true " with result "range_response_count:0 size:7" took too long (1.502418532s) to execute
	* 2020-11-09 21:48:59.384782 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests\" range_end:\"/registry/certificatesigningrequestt\" count_only:true " with result "range_response_count:0 size:7" took too long (6.24410582s) to execute
	* 2020-11-09 21:48:59.385009 W | etcdserver: read-only range request "key:\"/registry/rolebindings\" range_end:\"/registry/rolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (5.589339871s) to execute
	* 2020-11-09 21:48:59.385819 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (6.48034383s) to execute
	* 2020-11-09 21:48:59.386299 W | etcdserver: read-only range request "key:\"/registry/minions/crio-20201109134622-342799\" " with result "range_response_count:1 size:3588" took too long (3.644987743s) to execute
	* 2020-11-09 21:48:59.387369 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers\" range_end:\"/registry/horizontalpodautoscalert\" count_only:true " with result "range_response_count:0 size:5" took too long (4.418420626s) to execute
	* 2020-11-09 21:48:59.388668 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/crio-20201109134622-342799\" " with result "range_response_count:1 size:360" took too long (854.628936ms) to execute
	* 
	* ==> kernel <==
	*  21:51:18 up  1:33,  0 users,  load average: 9.72, 9.97, 8.47
	* Linux crio-20201109134622-342799 4.9.0-14-amd64 #1 SMP Debian 4.9.240-2 (2020-10-30) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [924ace492bcb448471ddfc805e99b498dbe7486ebf84c77aa690645ffa77bca4] <==
	* Trace[851991797]: [1.175301273s] [1.175301273s] END
	* I1109 21:51:07.038922       1 trace.go:81] Trace[1587229739]: "List etcd3: key=/pods/kubernetes-dashboard, resourceVersion=, limit: 0, continue: " (started: 2020-11-09 21:51:04.444516952 +0000 UTC m=+28.070536930) (total time: 2.594359882s):
	* Trace[1587229739]: [2.594359882s] [2.594359882s] END
	* I1109 21:51:07.039048       1 trace.go:81] Trace[1394165715]: "List /api/v1/nodes" (started: 2020-11-09 21:51:05.863207228 +0000 UTC m=+29.489227171) (total time: 1.17581666s):
	* Trace[1394165715]: [1.175397118s] [1.175387281s] Listing from storage done
	* I1109 21:51:07.039173       1 trace.go:81] Trace[1346543781]: "List /api/v1/namespaces/kubernetes-dashboard/pods" (started: 2020-11-09 21:51:04.444358263 +0000 UTC m=+28.070378211) (total time: 2.594795249s):
	* Trace[1346543781]: [2.5945764s] [2.594434015s] Listing from storage done
	* I1109 21:51:07.039286       1 trace.go:81] Trace[1638636632]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2020-11-09 21:51:04.37258274 +0000 UTC m=+27.998602696) (total time: 2.666665207s):
	* Trace[1638636632]: [2.666644857s] [2.664233405s] Transaction committed
	* I1109 21:51:09.510092       1 trace.go:81] Trace[1741058939]: "List etcd3: key=/pods/kubernetes-dashboard, resourceVersion=, limit: 0, continue: " (started: 2020-11-09 21:51:08.04844751 +0000 UTC m=+31.674467525) (total time: 1.461604973s):
	* Trace[1741058939]: [1.461604973s] [1.461604973s] END
	* I1109 21:51:09.510301       1 trace.go:81] Trace[2074990001]: "List /api/v1/namespaces/kubernetes-dashboard/pods" (started: 2020-11-09 21:51:08.048293442 +0000 UTC m=+31.674313424) (total time: 1.461994642s):
	* Trace[2074990001]: [1.461824299s] [1.461684409s] Listing from storage done
	* I1109 21:51:09.510828       1 trace.go:81] Trace[368736186]: "List etcd3: key=/jobs, resourceVersion=, limit: 500, continue: " (started: 2020-11-09 21:51:07.155371944 +0000 UTC m=+30.781391892) (total time: 2.355426182s):
	* Trace[368736186]: [2.355426182s] [2.355426182s] END
	* I1109 21:51:09.510912       1 trace.go:81] Trace[2041738141]: "List /apis/batch/v1/jobs" (started: 2020-11-09 21:51:07.155286386 +0000 UTC m=+30.781306333) (total time: 2.355610802s):
	* Trace[2041738141]: [2.355556621s] [2.355482461s] Listing from storage done
	* I1109 21:51:12.261414       1 trace.go:81] Trace[560758291]: "List etcd3: key=/cronjobs, resourceVersion=, limit: 500, continue: " (started: 2020-11-09 21:51:09.512487538 +0000 UTC m=+33.138507484) (total time: 2.748879777s):
	* Trace[560758291]: [2.748879777s] [2.748879777s] END
	* I1109 21:51:12.261532       1 trace.go:81] Trace[954873413]: "List /apis/batch/v1beta1/cronjobs" (started: 2020-11-09 21:51:09.512403006 +0000 UTC m=+33.138422959) (total time: 2.749115025s):
	* Trace[954873413]: [2.749032942s] [2.748958744s] Listing from storage done
	* I1109 21:51:12.262732       1 trace.go:81] Trace[2049262841]: "List etcd3: key=/pods/kubernetes-dashboard, resourceVersion=, limit: 0, continue: " (started: 2020-11-09 21:51:10.048312248 +0000 UTC m=+33.674332200) (total time: 2.214379731s):
	* Trace[2049262841]: [2.214379731s] [2.214379731s] END
	* I1109 21:51:12.263165       1 trace.go:81] Trace[173883954]: "List /api/v1/namespaces/kubernetes-dashboard/pods" (started: 2020-11-09 21:51:10.048231038 +0000 UTC m=+33.674251069) (total time: 2.214905975s):
	* Trace[173883954]: [2.21451847s] [2.214449996s] Listing from storage done
	* 
	* ==> kube-apiserver [fa2190fa1040ee01540dbc6246afa3ed28278757193f46497740f98f208305f0] <==
	* I1109 21:49:23.491836       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:23.491914       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:23.501725       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:23.959323       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
	* I1109 21:49:23.959694       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:23.959838       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:23.970037       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:25.035675       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
	* I1109 21:49:25.036146       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:25.036515       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:25.048920       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:43.491859       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
	* I1109 21:49:43.492096       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:43.492243       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:43.503257       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:43.959493       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
	* I1109 21:49:43.959712       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:43.959844       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:43.970821       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:45.035849       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
	* I1109 21:49:45.036058       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:45.036170       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:45.036202       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:45.036268       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* I1109 21:49:45.047481       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	* 
	* ==> kube-controller-manager [0535df8d071879c8406f331b9f4409352432de9cc58d427b7dfce4793a673601] <==
	* W1109 21:48:07.206493       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="crio-20201109134622-342799" does not exist
	* I1109 21:48:07.207451       1 controller_utils.go:1036] Caches are synced for taint controller
	* I1109 21:48:07.207605       1 node_lifecycle_controller.go:1189] Initializing eviction metric for zone: 
	* W1109 21:48:07.207719       1 node_lifecycle_controller.go:863] Missing timestamp for Node crio-20201109134622-342799. Assuming now as a timestamp.
	* I1109 21:48:07.207787       1 node_lifecycle_controller.go:1089] Controller detected that zone  is now in state Normal.
	* I1109 21:48:07.208014       1 taint_manager.go:182] Starting NoExecuteTaintManager
	* I1109 21:48:07.208221       1 event.go:258] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"crio-20201109134622-342799", UID:"ebe90bde-49f9-4fdc-a837-98c969cfa070", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node crio-20201109134622-342799 event: Registered Node crio-20201109134622-342799 in Controller
	* I1109 21:48:07.219638       1 controller_utils.go:1036] Caches are synced for TTL controller
	* I1109 21:48:07.267508       1 controller_utils.go:1036] Caches are synced for garbage collector controller
	* I1109 21:48:07.267539       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I1109 21:48:07.288637       1 controller_utils.go:1036] Caches are synced for attach detach controller
	* I1109 21:48:07.288939       1 controller_utils.go:1036] Caches are synced for daemon sets controller
	* I1109 21:48:07.293240       1 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"879c1af6-b019-4309-8790-68862df641bc", APIVersion:"apps/v1", ResourceVersion:"339", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-5d4dd4b4db to 1
	* I1109 21:48:07.295297       1 controller_utils.go:1036] Caches are synced for persistent volume controller
	* I1109 21:48:07.301396       1 controller_utils.go:1036] Caches are synced for node controller
	* I1109 21:48:07.301431       1 range_allocator.go:157] Starting range CIDR allocator
	* I1109 21:48:07.301457       1 controller_utils.go:1029] Waiting for caches to sync for cidrallocator controller
	* I1109 21:48:07.357755       1 controller_utils.go:1036] Caches are synced for resource quota controller
	* I1109 21:48:07.357770       1 controller_utils.go:1036] Caches are synced for garbage collector controller
	* I1109 21:48:07.357769       1 controller_utils.go:1036] Caches are synced for resource quota controller
	* I1109 21:48:07.370941       1 event.go:258] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"6eb71746-9284-4ef3-a4e6-c6eef2f9905c", APIVersion:"apps/v1", ResourceVersion:"223", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-n7x9d
	* I1109 21:48:07.371494       1 event.go:258] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"e1c3926b-aedc-4ba1-a531-89e830ed4064", APIVersion:"apps/v1", ResourceVersion:"209", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-q5gpm
	* I1109 21:48:07.374382       1 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5d4dd4b4db", UID:"c6d65372-6830-47ea-aafa-45e55da8d5f8", APIVersion:"apps/v1", ResourceVersion:"342", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-5d4dd4b4db-tfkqn
	* I1109 21:48:07.457581       1 controller_utils.go:1036] Caches are synced for cidrallocator controller
	* I1109 21:48:07.476734       1 range_allocator.go:310] Set node crio-20201109134622-342799 PodCIDR to 10.244.0.0/24
	* 
	* ==> kube-controller-manager [6b2af65166e1e030c3fcc83bca490d83cf9a98fb4405c0fe827ab5512a89bf51] <==
	* I1109 21:50:57.398200       1 controller_utils.go:1036] Caches are synced for GC controller
	* I1109 21:50:57.398307       1 controller_utils.go:1036] Caches are synced for certificate controller
	* I1109 21:50:57.402577       1 controller_utils.go:1036] Caches are synced for endpoint controller
	* I1109 21:50:57.419268       1 controller_utils.go:1036] Caches are synced for stateful set controller
	* I1109 21:50:57.445618       1 controller_utils.go:1036] Caches are synced for cidrallocator controller
	* I1109 21:50:57.479270       1 controller_utils.go:1036] Caches are synced for daemon sets controller
	* I1109 21:50:57.491488       1 controller_utils.go:1036] Caches are synced for bootstrap_signer controller
	* I1109 21:50:57.520600       1 controller_utils.go:1036] Caches are synced for attach detach controller
	* I1109 21:50:57.527898       1 controller_utils.go:1036] Caches are synced for persistent volume controller
	* I1109 21:50:57.568816       1 controller_utils.go:1036] Caches are synced for expand controller
	* I1109 21:50:57.600129       1 controller_utils.go:1036] Caches are synced for PV protection controller
	* I1109 21:50:57.892093       1 controller_utils.go:1036] Caches are synced for ReplicationController controller
	* I1109 21:50:57.908118       1 controller_utils.go:1036] Caches are synced for HPA controller
	* I1109 21:50:57.996544       1 controller_utils.go:1036] Caches are synced for disruption controller
	* I1109 21:50:57.996573       1 disruption.go:338] Sending events to api server.
	* I1109 21:50:57.998441       1 controller_utils.go:1036] Caches are synced for deployment controller
	* I1109 21:50:58.004625       1 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"f980d012-f11c-4249-b5ec-8b25258576cb", APIVersion:"apps/v1", ResourceVersion:"551", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-5ddb79bb9f to 1
	* I1109 21:50:58.007326       1 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"201432ef-b23e-49c5-b8fe-024dbc5e4f13", APIVersion:"apps/v1", ResourceVersion:"550", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-c8b69c96c to 1
	* I1109 21:50:58.019549       1 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5ddb79bb9f", UID:"dc0f78f9-c86c-4cf3-b1d3-c2c4a111708f", APIVersion:"apps/v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5ddb79bb9f-ghs7v
	* I1109 21:50:58.024574       1 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-c8b69c96c", UID:"3fd911ab-ee27-4009-8693-0a72acfb94e6", APIVersion:"apps/v1", ResourceVersion:"599", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-c8b69c96c-d6s6b
	* I1109 21:50:58.169138       1 controller_utils.go:1036] Caches are synced for garbage collector controller
	* I1109 21:50:58.169282       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I1109 21:50:58.170677       1 controller_utils.go:1036] Caches are synced for resource quota controller
	* I1109 21:50:58.198305       1 controller_utils.go:1036] Caches are synced for resource quota controller
	* I1109 21:50:58.199356       1 controller_utils.go:1036] Caches are synced for garbage collector controller
	* 
	* ==> kube-proxy [b11a67a0f8782086f120e3966570686c284c001eea6d1baa75359d8d3192e065] <==
	* W1109 21:48:08.923186       1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
	* I1109 21:48:08.935724       1 server_others.go:143] Using iptables Proxier.
	* I1109 21:48:08.936055       1 server.go:534] Version: v1.15.7
	* I1109 21:48:09.031928       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:48:09.032120       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1109 21:48:09.033028       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1109 21:48:09.034082       1 config.go:187] Starting service config controller
	* I1109 21:48:09.034150       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
	* I1109 21:48:09.034278       1 config.go:96] Starting endpoints config controller
	* I1109 21:48:09.034359       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
	* I1109 21:48:09.134539       1 controller_utils.go:1036] Caches are synced for service config controller
	* I1109 21:48:09.134568       1 controller_utils.go:1036] Caches are synced for endpoints config controller
	* 
	* ==> kube-proxy [f089f0193b59f54e6918af2c984f2810ae84282159d2183ec3136dc1fdf8a1a8] <==
	* W1109 21:50:45.040787       1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
	* I1109 21:50:45.090320       1 server_others.go:143] Using iptables Proxier.
	* I1109 21:50:45.103508       1 server.go:534] Version: v1.15.7
	* I1109 21:50:45.112345       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:50:45.112536       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1109 21:50:45.112648       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1109 21:50:45.112827       1 config.go:96] Starting endpoints config controller
	* I1109 21:50:45.112857       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
	* I1109 21:50:45.112997       1 config.go:187] Starting service config controller
	* I1109 21:50:45.113017       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
	* I1109 21:50:45.213209       1 controller_utils.go:1036] Caches are synced for service config controller
	* I1109 21:50:45.213345       1 controller_utils.go:1036] Caches are synced for endpoints config controller
	* 
	* ==> kube-scheduler [2eb925a38dde3615c382fc88effc40690917d8bee0ee4f89aa920834a6432ffa] <==
	* W1109 21:47:44.683256       1 authentication.go:55] Authentication is disabled
	* I1109 21:47:44.683277       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	* I1109 21:47:44.684258       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	* E1109 21:47:48.080256       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:47:48.080463       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1109 21:47:48.080613       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:47:48.080644       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:47:48.080739       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:47:48.080809       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:47:48.080905       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:47:48.158424       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:47:48.160035       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1109 21:47:48.165650       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:47:49.082254       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:47:49.162497       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1109 21:47:49.165053       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:47:49.166309       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:47:49.167268       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:47:49.168337       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:47:49.169452       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:47:49.170868       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:47:49.172179       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1109 21:47:49.173066       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:48:07.184059       1 factory.go:702] pod is already present in the activeQ
	* E1109 21:48:07.260236       1 factory.go:702] pod is already present in the activeQ
	* 
	* ==> kube-scheduler [35d3262f3358bf474919114424809a44099bb6a9cd97c538fe57a06992abf607] <==
	* I1109 21:50:37.421083       1 serving.go:319] Generated self-signed cert in-memory
	* W1109 21:50:38.048191       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
	* W1109 21:50:38.048228       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
	* W1109 21:50:38.048244       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
	* I1109 21:50:38.052337       1 server.go:142] Version: v1.15.7
	* I1109 21:50:38.052412       1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	* W1109 21:50:38.054125       1 authorization.go:47] Authorization is disabled
	* W1109 21:50:38.054148       1 authentication.go:55] Authentication is disabled
	* I1109 21:50:38.054164       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	* I1109 21:50:38.054660       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	* E1109 21:50:42.571241       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2020-11-09 21:50:25 UTC, end at Mon 2020-11-09 21:51:19 UTC. --
	* Nov 09 21:50:46 crio-20201109134622-342799 kubelet[901]: I1109 21:50:46.138382     901 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000ade1a0, TRANSIENT_FAILURE
	* Nov 09 21:50:46 crio-20201109134622-342799 kubelet[901]: I1109 21:50:46.138387     901 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0000936c0, CONNECTING
	* Nov 09 21:50:46 crio-20201109134622-342799 kubelet[901]: I1109 21:50:46.138398     901 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0000936c0, TRANSIENT_FAILURE
	* Nov 09 21:50:46 crio-20201109134622-342799 kubelet[901]: E1109 21:50:46.635128     901 remote_runtime.go:182] ListPodSandbox with filter &PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},} from runtime service failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 09 21:50:46 crio-20201109134622-342799 kubelet[901]: E1109 21:50:46.635219     901 kuberuntime_sandbox.go:210] ListPodSandbox failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 09 21:50:46 crio-20201109134622-342799 kubelet[901]: E1109 21:50:46.635237     901 kubelet_pods.go:1043] Error listing containers: &status.statusError{Code:14, Message:"all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\"", Details:[]*any.Any(nil)}
	* Nov 09 21:50:46 crio-20201109134622-342799 kubelet[901]: E1109 21:50:46.635279     901 kubelet.go:1977] Failed cleaning pods: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 09 21:50:46 crio-20201109134622-342799 kubelet[901]: E1109 21:50:46.924716     901 remote_runtime.go:182] ListPodSandbox with filter nil from runtime service failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 09 21:50:46 crio-20201109134622-342799 kubelet[901]: E1109 21:50:46.924855     901 kuberuntime_sandbox.go:210] ListPodSandbox failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 09 21:50:46 crio-20201109134622-342799 kubelet[901]: E1109 21:50:46.924875     901 generic.go:205] GenericPLEG: Unable to retrieve pods: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 09 21:50:47 crio-20201109134622-342799 kubelet[901]: I1109 21:50:47.138453     901 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000ade1a0, CONNECTING
	* Nov 09 21:50:47 crio-20201109134622-342799 kubelet[901]: I1109 21:50:47.138453     901 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0000936c0, CONNECTING
	* Nov 09 21:50:47 crio-20201109134622-342799 kubelet[901]: I1109 21:50:47.138545     901 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0000936c0, READY
	* Nov 09 21:50:47 crio-20201109134622-342799 kubelet[901]: I1109 21:50:47.138601     901 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000ade1a0, READY
	* Nov 09 21:50:55 crio-20201109134622-342799 kubelet[901]: E1109 21:50:55.027867     901 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods.slice": failed to get cgroup stats for "/kubepods.slice": failed to get container info for "/kubepods.slice": unknown container "/kubepods.slice"
	* Nov 09 21:50:55 crio-20201109134622-342799 kubelet[901]: E1109 21:50:55.027961     901 helpers.go:712] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Nov 09 21:50:58 crio-20201109134622-342799 kubelet[901]: I1109 21:50:58.165484     901 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-m4km7" (UniqueName: "kubernetes.io/secret/4fab7cfe-a4f7-4fac-aa3e-f4ced316c49d-kubernetes-dashboard-token-m4km7") pod "kubernetes-dashboard-5ddb79bb9f-ghs7v" (UID: "4fab7cfe-a4f7-4fac-aa3e-f4ced316c49d")
	* Nov 09 21:50:58 crio-20201109134622-342799 kubelet[901]: I1109 21:50:58.165562     901 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/1f4847a1-1cf8-4611-9454-1e266b2e40c7-tmp-volume") pod "dashboard-metrics-scraper-c8b69c96c-d6s6b" (UID: "1f4847a1-1cf8-4611-9454-1e266b2e40c7")
	* Nov 09 21:50:58 crio-20201109134622-342799 kubelet[901]: I1109 21:50:58.165652     901 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-m4km7" (UniqueName: "kubernetes.io/secret/1f4847a1-1cf8-4611-9454-1e266b2e40c7-kubernetes-dashboard-token-m4km7") pod "dashboard-metrics-scraper-c8b69c96c-d6s6b" (UID: "1f4847a1-1cf8-4611-9454-1e266b2e40c7")
	* Nov 09 21:50:58 crio-20201109134622-342799 kubelet[901]: I1109 21:50:58.165738     901 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/4fab7cfe-a4f7-4fac-aa3e-f4ced316c49d-tmp-volume") pod "kubernetes-dashboard-5ddb79bb9f-ghs7v" (UID: "4fab7cfe-a4f7-4fac-aa3e-f4ced316c49d")
	* Nov 09 21:51:05 crio-20201109134622-342799 kubelet[901]: E1109 21:51:05.038030     901 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods.slice": failed to get cgroup stats for "/kubepods.slice": failed to get container info for "/kubepods.slice": unknown container "/kubepods.slice"
	* Nov 09 21:51:05 crio-20201109134622-342799 kubelet[901]: E1109 21:51:05.038077     901 helpers.go:712] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Nov 09 21:51:15 crio-20201109134622-342799 kubelet[901]: E1109 21:51:15.021720     901 pod_workers.go:190] Error syncing pod f185742f-0b73-4465-920b-555675a3559b ("storage-provisioner_kube-system(f185742f-0b73-4465-920b-555675a3559b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f185742f-0b73-4465-920b-555675a3559b)"
	* Nov 09 21:51:15 crio-20201109134622-342799 kubelet[901]: E1109 21:51:15.052135     901 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods.slice": failed to get cgroup stats for "/kubepods.slice": failed to get container info for "/kubepods.slice": unknown container "/kubepods.slice"
	* Nov 09 21:51:15 crio-20201109134622-342799 kubelet[901]: E1109 21:51:15.052180     901 helpers.go:712] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* 
	* ==> kubernetes-dashboard [a2385206f4e1fcc8de509bc48b88929c0f3527c5dafab15b5f18c5f7fe6170c8] <==
	* 2020/11/09 21:50:59 Using namespace: kubernetes-dashboard
	* 2020/11/09 21:50:59 Using in-cluster config to connect to apiserver
	* 2020/11/09 21:50:59 Using secret token for csrf signing
	* 2020/11/09 21:50:59 Initializing csrf token from kubernetes-dashboard-csrf secret
	* 2020/11/09 21:50:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	* 2020/11/09 21:50:59 Successful initial request to the apiserver, version: v1.15.7
	* 2020/11/09 21:50:59 Generating JWE encryption key
	* 2020/11/09 21:50:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	* 2020/11/09 21:50:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	* 2020/11/09 21:51:00 Initializing JWE encryption key from synchronized object
	* 2020/11/09 21:51:00 Creating in-cluster Sidecar client
	* 2020/11/09 21:51:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	* 2020/11/09 21:51:00 Serving insecurely on HTTP port: 9090
	* 2020/11/09 21:50:59 Starting overwatch
	* 
	* ==> storage-provisioner [123ca3d133b5ab8f7f137c430ce09feeb423cebe0455ec9e3a44a77f3dccdfa6] <==
	* F1109 21:51:14.645854       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 13:51:18.093565  576291 out.go:286] unable to execute * 2020-11-09 21:51:07.036992 W | etcdserver: request "header:<ID:11492471218553520308 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.70.16\" mod_revision:588 > success:<request_put:<key:\"/registry/masterleases/192.168.70.16\" value_size:68 lease:2269099181698744498 >> failure:<request_range:<key:\"/registry/masterleases/192.168.70.16\" > >>" with result "size:16" took too long (286.762387ms) to execute
	: html/template:* 2020-11-09 21:51:07.036992 W | etcdserver: request "header:<ID:11492471218553520308 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.70.16\" mod_revision:588 > success:<request_put:<key:\"/registry/masterleases/192.168.70.16\" value_size:68 lease:2269099181698744498 >> failure:<request_range:<key:\"/registry/masterleases/192.168.70.16\" > >>" with result "size:16" took too long (286.762387ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.

                                                
                                                
** /stderr **
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p crio-20201109134622-342799 -n crio-20201109134622-342799
helpers_test.go:255: (dbg) Run:  kubectl --context crio-20201109134622-342799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: non-running pods: 
helpers_test.go:263: ======> post-mortem[TestStartStop/group/crio/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:266: (dbg) Run:  kubectl --context crio-20201109134622-342799 describe pod 

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/VerifyKubernetesImages
helpers_test.go:266: (dbg) Non-zero exit: kubectl --context crio-20201109134622-342799 describe pod : exit status 1 (108.487686ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:268: kubectl --context crio-20201109134622-342799 describe pod : exit status 1
--- FAIL: TestStartStop/group/crio/serial/VerifyKubernetesImages (8.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (10.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:232: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20201109134950-342799 "sudo crictl images -o json"
start_stop_delete_test.go:232: Found non-minikube image: minikube-local-cache-test:functional-20201109132758-342799
start_stop_delete_test.go:232: v1.19.2 images mismatch (-want +got):
[]string{
- 	"docker.io/kubernetesui/dashboard:v2.0.3",
- 	"docker.io/kubernetesui/metrics-scraper:v1.0.4",
	"gcr.io/k8s-minikube/storage-provisioner:v3",
	"k8s.gcr.io/coredns:1.7.0",
	... // 4 identical elements
	"k8s.gcr.io/kube-scheduler:v1.19.2",
	"k8s.gcr.io/pause:3.2",
+ 	"kubernetesui/dashboard:v2.0.3",
+ 	"kubernetesui/metrics-scraper:v1.0.4",
}
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:225: (dbg) Run:  docker inspect newest-cni-20201109134950-342799
helpers_test.go:229: (dbg) docker inspect newest-cni-20201109134950-342799:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4b41662211fbd0875da28ab47f86aecabab8700d409ea63dd534510142270c24",
	        "Created": "2020-11-09T21:49:52.558857361Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 583306,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-11-09T21:51:35.661262577Z",
	            "FinishedAt": "2020-11-09T21:51:33.083357923Z"
	        },
	        "Image": "sha256:e0876e0a2db41a04c8143cbd27a7a5f9e10b610cb093093def031c59e5b44b0c",
	        "ResolvConfPath": "/var/lib/docker/containers/4b41662211fbd0875da28ab47f86aecabab8700d409ea63dd534510142270c24/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b41662211fbd0875da28ab47f86aecabab8700d409ea63dd534510142270c24/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b41662211fbd0875da28ab47f86aecabab8700d409ea63dd534510142270c24/hosts",
	        "LogPath": "/var/lib/docker/containers/4b41662211fbd0875da28ab47f86aecabab8700d409ea63dd534510142270c24/4b41662211fbd0875da28ab47f86aecabab8700d409ea63dd534510142270c24-json.log",
	        "Name": "/newest-cni-20201109134950-342799",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20201109134950-342799:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20201109134950-342799",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9186d985e1d5c956838acea77f1fa674eab7a4ac9591176e19509620636f3624-init/diff:/var/lib/docker/overlay2/95e86d765bff52afd170fbce58d14aba80acb592622f18ddf93f06d4a9f68b7b/diff:/var/lib/docker/overlay2/f3b9d7ca87d0bc2c33a2b73d4acfe36b9573ed84758410f9fb2646a520f241fe/diff:/var/lib/docker/overlay2/207fc0f6ee67dd96ccb25c63e93e7189f5f397b509f8c71ead5583a80c5f94b5/diff:/var/lib/docker/overlay2/8caa5ec9f05b14f99c7708700c1734fe9d519d629a5228c0c3cb1231f1cef09e/diff:/var/lib/docker/overlay2/f2f5d9470c647e56b32cee1438db6b6249291738854b6e7a0a4a405034649aa6/diff:/var/lib/docker/overlay2/99b3672a581ed70eb5d25da5aed44455272a7adbd48148c91dc38abfbaa6e93d/diff:/var/lib/docker/overlay2/4bc3f7ac122327f368cdfb9bf82d578fce25cf7f63160aa08d869610da8889f4/diff:/var/lib/docker/overlay2/0f945a2ada7520d0f95289ddf08a8e7aedfd5dbfad8087529175cd81d61b52bc/diff:/var/lib/docker/overlay2/a14122d44a5bdb05416a0e7fbe1684738e86454c2eeb7671ecc856e6fdaa0273/diff:/var/lib/docker/overlay2/e669c0
1451df27b1aeed4835d4d9d40fae282947f2b260d9bc1d86355f464b80/diff:/var/lib/docker/overlay2/f0227d1f2e3671850f58a9e89d347d8c2a26c40d79b078f02122819452c582d8/diff:/var/lib/docker/overlay2/35b3431b00b5c7bd66b1d24487a8a45645708ce7a87848986068c0df1515b812/diff:/var/lib/docker/overlay2/c1af3457c560ed19e519e64ff919ac7035d8d9801f4f44adf7b160339827ebd0/diff:/var/lib/docker/overlay2/3861f37a52f49a93d483d69cc77384a5792b260325af8224c6225039f7480889/diff:/var/lib/docker/overlay2/05239e7556fc89f82e761ff1ddd2d1ed0ee1b426a039ad3c9a3ba0519d737f51/diff:/var/lib/docker/overlay2/cb1f04e7554841e6f2102080f7854a3c7e464e45648c1419ef2f8b4d8a03f0bb/diff:/var/lib/docker/overlay2/80908997b1122a686bee670e8dd93659dece6b51614d8576f02c11b96178534d/diff:/var/lib/docker/overlay2/6c4d0be909f5f0a95818a6009b44c355b57e39b0709de1cce2ba45597354d590/diff:/var/lib/docker/overlay2/3a855d5f2b162c2d938a5745ba8131c9a2b244967b4a03f6cddd177236f3a934/diff:/var/lib/docker/overlay2/0ab0a2a3e614830a41a389aa6ff55503f9c10b1227acfb927550f9726eb9a605/diff:/var/lib/d
ocker/overlay2/3331dc047d4c1c4d3b6bf48aba65be1ad34cff6d25329ef56e91ad8e4ee65fde/diff:/var/lib/docker/overlay2/d286c2ac5bb00869e09924bb7fc7389073b7267af1cc4c3ae6a339b29e4c0cc1/diff:/var/lib/docker/overlay2/d0a518fbd2bc93756d11040cfe4d7757f96e813dd9ddf918a54a82f7e61cb791/diff:/var/lib/docker/overlay2/2d63359a4d0da5faa54d017f487391caaee972fe4fb98db4b82e7aee2d9b01ee/diff:/var/lib/docker/overlay2/40ca142bba947004e2782bce25bd04810589fffbf8086ba91e3a074517cfa13c/diff:/var/lib/docker/overlay2/5b738c1270b81825ef44e9bd466dd8533bc4e3e3796355d7dc5dc104fb3a18c8/diff:/var/lib/docker/overlay2/c57d8a6f6b648bd2cf9d7140762b8590e1a71b9040c8a0f2b8ab31f65011bcd2/diff:/var/lib/docker/overlay2/37f83aba64f92a6cf695468f5b0a1016634c95e09933dcecec297cca22b9038b/diff:/var/lib/docker/overlay2/396d3843ed4e04893bd4bf961eab2522ce90962e626e057f5be38cbe0ed65d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9186d985e1d5c956838acea77f1fa674eab7a4ac9591176e19509620636f3624/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9186d985e1d5c956838acea77f1fa674eab7a4ac9591176e19509620636f3624/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9186d985e1d5c956838acea77f1fa674eab7a4ac9591176e19509620636f3624/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20201109134950-342799",
	                "Source": "/var/lib/docker/volumes/newest-cni-20201109134950-342799/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20201109134950-342799",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20201109134950-342799",
	                "name.minikube.sigs.k8s.io": "newest-cni-20201109134950-342799",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c140784cf313cbb340d849a13db9f6d6d98a9b36d37dde8f2f25b655635f1ade",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c140784cf313",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20201109134950-342799": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.59.16"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4b41662211fb"
	                    ],
	                    "NetworkID": "6796ea11076b994fb365bae5a339562eb0a3e499155be8b8ac885b173abd65a3",
	                    "EndpointID": "d9ef55aef290b39a723a9f4ef67c909ae32e2f13ce3fd3984feb4576ad350c88",
	                    "Gateway": "192.168.59.1",
	                    "IPAddress": "192.168.59.16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3b:10",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20201109134950-342799 -n newest-cni-20201109134950-342799
helpers_test.go:238: <<< TestStartStop/group/newest-cni/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-20201109134950-342799 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
helpers_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-20201109134950-342799 logs -n 25: (3.411363975s)
helpers_test.go:246: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Mon 2020-11-09 21:51:36 UTC, end at Mon 2020-11-09 21:52:34 UTC. --
	* Nov 09 21:51:50 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:50.737598948Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Nov 09 21:51:50 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:50.737637761Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Nov 09 21:51:50 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:50.737660708Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	* Nov 09 21:51:50 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:50.737680280Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Nov 09 21:51:50 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:50.741731445Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Nov 09 21:51:50 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:50.741772175Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Nov 09 21:51:50 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:50.741832739Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	* Nov 09 21:51:50 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:50.741854436Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Nov 09 21:51:50 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:50.766277467Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Nov 09 21:51:53 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:53.940836641Z" level=warning msg="Your kernel does not support swap memory limit"
	* Nov 09 21:51:53 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:53.940888529Z" level=warning msg="Your kernel does not support cgroup rt period"
	* Nov 09 21:51:53 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:53.940899394Z" level=warning msg="Your kernel does not support cgroup rt runtime"
	* Nov 09 21:51:53 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:53.941126411Z" level=info msg="Loading containers: start."
	* Nov 09 21:51:54 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:54.206991008Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Nov 09 21:51:54 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:54.395086621Z" level=info msg="Loading containers: done."
	* Nov 09 21:51:54 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:54.447172019Z" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 version=19.03.13
	* Nov 09 21:51:54 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:54.447345454Z" level=info msg="Daemon has completed initialization"
	* Nov 09 21:51:54 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:54.468143007Z" level=info msg="API listen on [::]:2376"
	* Nov 09 21:51:54 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:54.468199660Z" level=info msg="API listen on /var/run/docker.sock"
	* Nov 09 21:51:54 newest-cni-20201109134950-342799 systemd[1]: Started Docker Application Container Engine.
	* Nov 09 21:52:27 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:52:27.023208284Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:52:28 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:52:28.477824134Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:52:30 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:52:30.550508221Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:52:32 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:52:32.518922188Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:52:33 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:52:33.706343594Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	* 2749b8d949aea       607331163122e       17 seconds ago       Running             kube-apiserver            1                   5665646506ce6
	* c714ee3191748       8603821e1a7a5       17 seconds ago       Running             kube-controller-manager   1                   5490212470de7
	* 945beac6a1937       0369cf4303ffd       17 seconds ago       Running             etcd                      1                   8109405ef81a8
	* 4fd5332423254       2f32d66b884f8       17 seconds ago       Running             kube-scheduler            1                   1186fd00c7d72
	* 6e239b637a0ae       bad58561c4be7       About a minute ago   Exited              storage-provisioner       0                   cae537ffa0a24
	* 6245c18490190       d373dd5a8593a       About a minute ago   Exited              kube-proxy                0                   39c7ae09d2835
	* 648b2e2d5a589       2f32d66b884f8       About a minute ago   Exited              kube-scheduler            0                   a8b4ca883191e
	* 742bb8f23b68a       8603821e1a7a5       About a minute ago   Exited              kube-controller-manager   0                   37cf43b6ffbd1
	* 524074355b73e       0369cf4303ffd       About a minute ago   Exited              etcd                      0                   99e66ece8710b
	* 32aa383f266ae       607331163122e       About a minute ago   Exited              kube-apiserver            0                   f6bd85a6b9e56
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20201109134950-342799
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=newest-cni-20201109134950-342799
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=21ac2a6a37964be4739a8be2fb5a50a8d224597d
	*                     minikube.k8s.io/name=newest-cni-20201109134950-342799
	*                     minikube.k8s.io/updated_at=2020_11_09T13_50_55_0700
	*                     minikube.k8s.io/version=v1.14.2
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Mon, 09 Nov 2020 21:50:51 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  newest-cni-20201109134950-342799
	*   AcquireTime:     <unset>
	*   RenewTime:       Mon, 09 Nov 2020 21:52:25 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Mon, 09 Nov 2020 21:52:25 +0000   Mon, 09 Nov 2020 21:50:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Mon, 09 Nov 2020 21:52:25 +0000   Mon, 09 Nov 2020 21:50:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Mon, 09 Nov 2020 21:52:25 +0000   Mon, 09 Nov 2020 21:50:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Mon, 09 Nov 2020 21:52:25 +0000   Mon, 09 Nov 2020 21:51:06 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.59.16
	*   Hostname:    newest-cni-20201109134950-342799
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 ca6d465f40c7494792e56eccadc1757c
	*   System UUID:                efe4ddc1-fe61-487b-92e1-189128bd4193
	*   Boot ID:                    9ad1ab50-5be9-48e2-8ae1-dc31113bc120
	*   Kernel Version:             4.9.0-14-amd64
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://19.3.13
	*   Kubelet Version:            v1.19.2
	*   Kube-Proxy Version:         v1.19.2
	* PodCIDR:                      192.168.0.0/24
	* PodCIDRs:                     192.168.0.0/24
	* Non-terminated Pods:          (7 in total)
	*   Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	*   kube-system                 coredns-f9fd979d6-bbqqk                                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     77s
	*   kube-system                 etcd-newest-cni-20201109134950-342799                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	*   kube-system                 kube-apiserver-newest-cni-20201109134950-342799             250m (3%)     0 (0%)      0 (0%)           0 (0%)         98s
	*   kube-system                 kube-controller-manager-newest-cni-20201109134950-342799    200m (2%)     0 (0%)      0 (0%)           0 (0%)         98s
	*   kube-system                 kube-proxy-r7l4h                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	*   kube-system                 kube-scheduler-newest-cni-20201109134950-342799             100m (1%)     0 (0%)      0 (0%)           0 (0%)         98s
	*   kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests   Limits
	*   --------           --------   ------
	*   cpu                650m (8%)  0 (0%)
	*   memory             70Mi (0%)  170Mi (0%)
	*   ephemeral-storage  0 (0%)     0 (0%)
	*   hugepages-1Gi      0 (0%)     0 (0%)
	*   hugepages-2Mi      0 (0%)     0 (0%)
	* Events:
	*   Type    Reason                   Age                  From        Message
	*   ----    ------                   ----                 ----        -------
	*   Normal  NodeHasSufficientMemory  111s (x5 over 111s)  kubelet     Node newest-cni-20201109134950-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    111s (x4 over 111s)  kubelet     Node newest-cni-20201109134950-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     111s (x4 over 111s)  kubelet     Node newest-cni-20201109134950-342799 status is now: NodeHasSufficientPID
	*   Normal  Starting                 99s                  kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  99s                  kubelet     Node newest-cni-20201109134950-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    99s                  kubelet     Node newest-cni-20201109134950-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     99s                  kubelet     Node newest-cni-20201109134950-342799 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             99s                  kubelet     Node newest-cni-20201109134950-342799 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  98s                  kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                88s                  kubelet     Node newest-cni-20201109134950-342799 status is now: NodeReady
	*   Normal  Starting                 75s                  kube-proxy  Starting kube-proxy.
	*   Normal  Starting                 36s                  kubelet     Starting kubelet.
	*   Normal  NodeAllocatableEnforced  36s                  kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeHasSufficientPID     30s (x7 over 36s)    kubelet     Node newest-cni-20201109134950-342799 status is now: NodeHasSufficientPID
	*   Normal  NodeHasSufficientMemory  29s (x8 over 36s)    kubelet     Node newest-cni-20201109134950-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    29s (x8 over 36s)    kubelet     Node newest-cni-20201109134950-342799 status is now: NodeHasNoDiskPressure
	* 
	* ==> dmesg <==
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 8e fa 6b c5 74 76 08 06        ........k.tv..
	* [  +2.105133] IPv4: martian source 10.85.0.7 from 10.85.0.7, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 16 02 35 51 a2 b9 08 06        ........5Q....
	* [  +0.717968] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethb5ce8c55
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 8a d7 8b bc 57 3f 08 06        ..........W?..
	* [  +1.162156] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000003] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +0.000084] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000002] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +0.000145] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000002] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +0.152977] IPv4: martian source 10.85.0.8 from 10.85.0.8, on dev eth0
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 1e 8d 25 be 2f 51 08 06        ........%./Q..
	* [  +0.854747] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000003] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +0.000003] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000002] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +0.276266] IPv4: martian source 10.85.0.9 from 10.85.0.9, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 66 5c 4c 17 93 ad 08 06        ......f\L.....
	* [  +1.739521] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000003] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +0.004131] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000003] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* 
	* ==> etcd [524074355b73] <==
	* 2020-11-09 21:50:45.198680 I | embed: ready to serve client requests
	* 2020-11-09 21:50:45.205944 I | embed: ready to serve client requests
	* 2020-11-09 21:50:45.206908 I | etcdserver: setting up the initial cluster version to 3.4
	* 2020-11-09 21:50:45.213683 I | embed: serving client requests on 192.168.59.16:2379
	* 2020-11-09 21:50:45.213851 I | embed: serving client requests on 127.0.0.1:2379
	* 2020-11-09 21:50:45.219894 N | etcdserver/membership: set the initial cluster version to 3.4
	* 2020-11-09 21:50:45.220003 I | etcdserver/api: enabled capabilities for version 3.4
	* 2020-11-09 21:51:03.409275 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:51:06.545455 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context canceled" took too long (2.000046567s) to execute
	* WARNING: 2020/11/09 21:51:06 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	* 2020-11-09 21:51:06.706974 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503)
	* 2020-11-09 21:51:06.749860 W | wal: sync duration of 2.402983421s, expected less than 1s
	* 2020-11-09 21:51:07.036548 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/pvc-protection-controller\" " with result "range_response_count:1 size:218" took too long (2.686492399s) to execute
	* 2020-11-09 21:51:07.037121 W | etcdserver: request "header:<ID:8039336204269454475 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/newest-cni-20201109134950-342799\" mod_revision:215 > success:<request_put:<key:\"/registry/leases/kube-node-lease/newest-cni-20201109134950-342799\" value_size:608 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/newest-cni-20201109134950-342799\" > >>" with result "size:16" took too long (286.834312ms) to execute
	* 2020-11-09 21:51:08.030737 W | wal: sync duration of 1.010469087s, expected less than 1s
	* 2020-11-09 21:51:09.496453 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-newest-cni-20201109134950-342799\" " with result "range_response_count:1 size:4345" took too long (4.072505659s) to execute
	* 2020-11-09 21:51:09.498944 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (5.038542636s) to execute
	* 2020-11-09 21:51:09.499332 W | etcdserver: request "header:<ID:8039336204269454480 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20201109134950-342799.1645f5675eaaaa2f\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20201109134950-342799.1645f5675eaaaa2f\" value_size:763 lease:8039336204269454336 >> failure:<>>" with result "size:16" took too long (1.468106766s) to execute
	* 2020-11-09 21:51:12.261279 W | wal: sync duration of 4.230351973s, expected less than 1s
	* 2020-11-09 21:51:12.263707 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/namespace-controller\" " with result "range_response_count:0 size:5" took too long (2.758477528s) to execute
	* 2020-11-09 21:51:12.271236 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (2.319642491s) to execute
	* 2020-11-09 21:51:15.707227 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:51:22.573681 N | pkg/osutil: received terminated signal, shutting down...
	* WARNING: 2020/11/09 21:51:22 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* 2020-11-09 21:51:22.664764 I | etcdserver: skipped leadership transfer for single voting member cluster
	* 
	* ==> etcd [945beac6a193] <==
	* 2020-11-09 21:52:18.146786 I | etcdserver: restarting member 47984c33979a6f91 in cluster 79741b01b410835d at commit index 426
	* raft2020/11/09 21:52:18 INFO: 47984c33979a6f91 switched to configuration voters=()
	* raft2020/11/09 21:52:18 INFO: 47984c33979a6f91 became follower at term 2
	* raft2020/11/09 21:52:18 INFO: newRaft 47984c33979a6f91 [peers: [], term: 2, commit: 426, applied: 0, lastindex: 426, lastterm: 2]
	* 2020-11-09 21:52:18.159522 W | auth: simple token is not cryptographically signed
	* 2020-11-09 21:52:18.227523 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	* raft2020/11/09 21:52:18 INFO: 47984c33979a6f91 switched to configuration voters=(5158957157623426961)
	* 2020-11-09 21:52:18.271992 I | etcdserver/membership: added member 47984c33979a6f91 [https://192.168.59.16:2380] to cluster 79741b01b410835d
	* 2020-11-09 21:52:18.272110 N | etcdserver/membership: set the initial cluster version to 3.4
	* 2020-11-09 21:52:18.272156 I | etcdserver/api: enabled capabilities for version 3.4
	* 2020-11-09 21:52:18.313894 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	* 2020-11-09 21:52:18.314388 I | embed: listening for metrics on http://127.0.0.1:2381
	* 2020-11-09 21:52:18.314978 I | embed: listening for peers on 192.168.59.16:2380
	* raft2020/11/09 21:52:19 INFO: 47984c33979a6f91 is starting a new election at term 2
	* raft2020/11/09 21:52:19 INFO: 47984c33979a6f91 became candidate at term 3
	* raft2020/11/09 21:52:19 INFO: 47984c33979a6f91 received MsgVoteResp from 47984c33979a6f91 at term 3
	* raft2020/11/09 21:52:19 INFO: 47984c33979a6f91 became leader at term 3
	* raft2020/11/09 21:52:19 INFO: raft.node: 47984c33979a6f91 elected leader 47984c33979a6f91 at term 3
	* 2020-11-09 21:52:19.458516 I | etcdserver: published {Name:newest-cni-20201109134950-342799 ClientURLs:[https://192.168.59.16:2379]} to cluster 79741b01b410835d
	* 2020-11-09 21:52:19.458854 I | embed: ready to serve client requests
	* 2020-11-09 21:52:19.459341 I | embed: ready to serve client requests
	* 2020-11-09 21:52:19.461127 I | embed: serving client requests on 192.168.59.16:2379
	* 2020-11-09 21:52:19.461646 I | embed: serving client requests on 127.0.0.1:2379
	* 2020-11-09 21:52:31.224496 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:52:32.079685 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> kernel <==
	*  21:52:35 up  1:35,  0 users,  load average: 14.65, 11.32, 9.06
	* Linux newest-cni-20201109134950-342799 4.9.0-14-amd64 #1 SMP Debian 4.9.240-2 (2020-10-30) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [2749b8d949ae] <==
	* I1109 21:52:25.504893       1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key
	* I1109 21:52:25.505096       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	* I1109 21:52:25.505385       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	* I1109 21:52:25.504120       1 customresource_discovery_controller.go:209] Starting DiscoveryController
	* I1109 21:52:25.505128       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	* I1109 21:52:25.506264       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	* I1109 21:52:25.504186       1 autoregister_controller.go:141] Starting autoregister controller
	* I1109 21:52:25.506605       1 cache.go:32] Waiting for caches to sync for autoregister controller
	* I1109 21:52:25.506149       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	* I1109 21:52:25.506161       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	* E1109 21:52:25.604810       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	* I1109 21:52:25.604939       1 cache.go:39] Caches are synced for AvailableConditionController controller
	* I1109 21:52:25.605215       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	* I1109 21:52:25.605868       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	* I1109 21:52:25.607114       1 cache.go:39] Caches are synced for autoregister controller
	* I1109 21:52:25.607406       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	* I1109 21:52:25.658402       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	* I1109 21:52:26.501142       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	* I1109 21:52:26.501184       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	* I1109 21:52:26.509829       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	* I1109 21:52:27.769821       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	* I1109 21:52:27.796893       1 controller.go:606] quota admission added evaluator for: deployments.apps
	* I1109 21:52:27.884889       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	* I1109 21:52:27.912638       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	* I1109 21:52:27.931744       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	* 
	* ==> kube-apiserver [32aa383f266a] <==
	* W1109 21:51:31.748992       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:31.760214       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:31.795037       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:31.818232       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:31.839727       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:31.884704       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:31.898255       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:31.922828       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.002043       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.036997       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.090075       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.113242       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.137692       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.148991       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.156547       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.203769       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.231625       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.235764       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.283977       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.291888       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.301237       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.383096       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.479491       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.539080       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.558715       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* 
	* ==> kube-controller-manager [742bb8f23b68] <==
	* I1109 21:51:17.677620       1 shared_informer.go:247] Caches are synced for deployment 
	* I1109 21:51:17.677665       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	* I1109 21:51:17.685553       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-f9fd979d6 to 2"
	* E1109 21:51:17.687052       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"c74cd31b-91a8-4807-8b33-a0854ca4ea6a", ResourceVersion:"208", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63740555454, loc:(*time.Location)(0x6a59c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001cb87c0), FieldsType:"FieldsV1", FieldsV1:(*v1.Fiel
dsV1)(0xc001cb87e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001cb8800), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)
(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0000a6e00), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v
1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001cb8820), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersi
stentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001cb8840), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.
DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.19.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"",
ValueFrom:(*v1.EnvVarSource)(0xc001cb8880)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001c95380), Stdin:false, StdinOnce:false, TTY:false}},
EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00044b168), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0009d6a80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConf
ig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000715f38)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00044b1b8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	* I1109 21:51:17.695046       1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-snvwn"
	* I1109 21:51:17.757561       1 shared_informer.go:247] Caches are synced for taint 
	* I1109 21:51:17.757772       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	* I1109 21:51:17.757848       1 taint_manager.go:187] Starting NoExecuteTaintManager
	* W1109 21:51:17.757901       1 node_lifecycle_controller.go:1044] Missing timestamp for Node newest-cni-20201109134950-342799. Assuming now as a timestamp.
	* I1109 21:51:17.757956       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
	* I1109 21:51:17.758147       1 event.go:291] "Event occurred" object="newest-cni-20201109134950-342799" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20201109134950-342799 event: Registered Node newest-cni-20201109134950-342799 in Controller"
	* I1109 21:51:17.774853       1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-bbqqk"
	* I1109 21:51:17.774900       1 shared_informer.go:247] Caches are synced for disruption 
	* I1109 21:51:17.775012       1 disruption.go:339] Sending events to api server.
	* I1109 21:51:17.775929       1 shared_informer.go:247] Caches are synced for attach detach 
	* I1109 21:51:17.778110       1 shared_informer.go:247] Caches are synced for resource quota 
	* I1109 21:51:17.778834       1 shared_informer.go:247] Caches are synced for resource quota 
	* E1109 21:51:17.782415       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"c74cd31b-91a8-4807-8b33-a0854ca4ea6a", ResourceVersion:"340", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63740555454, loc:(*time.Location)(0x6a59c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0010cd1e0), FieldsType:"FieldsV1", FieldsV1:(*v1.Fiel
dsV1)(0xc0010cd240)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0010cd2a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0010cd300)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0010cd360), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistent
DiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00127e880), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.S
caleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0010cd3c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolum
eSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0010cd420), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBD
VolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.19.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf
", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0010cd4e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMes
sagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00171c180), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000e4af38), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000cbe690), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operat
or:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0000c6b30)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000e4b018)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object h
as been modified; please apply your changes to the latest version and try again
	* I1109 21:51:17.824967       1 shared_informer.go:247] Caches are synced for ReplicationController 
	* I1109 21:51:17.867045       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I1109 21:51:18.138641       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:51:18.138679       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I1109 21:51:18.167466       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:51:18.459787       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-f9fd979d6 to 1"
	* I1109 21:51:18.475190       1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-f9fd979d6-snvwn"
	* 
	* ==> kube-controller-manager [c714ee319174] <==
	* I1109 21:52:27.998963       1 controllermanager.go:549] Started "cronjob"
	* I1109 21:52:27.999861       1 cronjob_controller.go:96] Starting CronJob Manager
	* I1109 21:52:28.013087       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
	* I1109 21:52:28.013124       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	* I1109 21:52:28.013151       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	* I1109 21:52:28.057606       1 shared_informer.go:247] Caches are synced for tokens 
	* I1109 21:52:28.059763       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
	* I1109 21:52:28.059796       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	* I1109 21:52:28.059825       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	* I1109 21:52:28.060202       1 controllermanager.go:549] Started "csrsigning"
	* W1109 21:52:28.060227       1 controllermanager.go:541] Skipping "ephemeral-volume"
	* I1109 21:52:28.060423       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
	* I1109 21:52:28.060441       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	* I1109 21:52:28.060460       1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
	* I1109 21:52:28.060473       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	* I1109 21:52:28.060473       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	* I1109 21:52:28.060539       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
	* I1109 21:52:28.101458       1 controllermanager.go:549] Started "namespace"
	* I1109 21:52:28.101604       1 namespace_controller.go:200] Starting namespace controller
	* I1109 21:52:28.101783       1 shared_informer.go:240] Waiting for caches to sync for namespace
	* I1109 21:52:28.111447       1 controllermanager.go:549] Started "tokencleaner"
	* I1109 21:52:28.111582       1 tokencleaner.go:118] Starting token cleaner controller
	* I1109 21:52:28.111599       1 shared_informer.go:240] Waiting for caches to sync for token_cleaner
	* I1109 21:52:28.111607       1 shared_informer.go:247] Caches are synced for token_cleaner 
	* I1109 21:52:28.122981       1 node_ipam_controller.go:91] Sending events to api server.
	* 
	* ==> kube-proxy [6245c1849019] <==
	* I1109 21:51:19.614770       1 node.go:136] Successfully retrieved node IP: 192.168.59.16
	* I1109 21:51:19.614886       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.59.16), assume IPv4 operation
	* W1109 21:51:19.869621       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
	* I1109 21:51:19.869755       1 server_others.go:186] Using iptables Proxier.
	* I1109 21:51:19.870063       1 server.go:650] Version: v1.19.2
	* I1109 21:51:19.870647       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:51:19.870754       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1109 21:51:19.870824       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1109 21:51:19.871454       1 config.go:315] Starting service config controller
	* I1109 21:51:19.871474       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I1109 21:51:19.873448       1 config.go:224] Starting endpoint slice config controller
	* I1109 21:51:19.873665       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I1109 21:51:19.971712       1 shared_informer.go:247] Caches are synced for service config 
	* I1109 21:51:19.974690       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* 
	* ==> kube-scheduler [4fd533242325] <==
	* I1109 21:52:17.963799       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:52:17.963875       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:52:19.260360       1 serving.go:331] Generated self-signed cert in-memory
	* W1109 21:52:25.534229       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	* W1109 21:52:25.534285       1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	* W1109 21:52:25.534298       1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
	* W1109 21:52:25.534307       1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	* I1109 21:52:25.597204       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:52:25.597239       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:52:25.614305       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	* I1109 21:52:25.614451       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:52:25.614462       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:52:25.614482       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* I1109 21:52:25.715195       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kube-scheduler [648b2e2d5a58] <==
	* E1109 21:50:51.871946       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1109 21:50:51.871986       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:50:51.871971       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:50:51.872138       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:50:51.872148       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:50:51.872228       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:50:51.872278       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:50:51.872289       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1109 21:50:51.872453       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:50:51.873090       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:50:51.874897       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:50:51.875063       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:50:51.875276       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1109 21:50:52.724384       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:50:52.741702       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:50:52.751251       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:50:52.790958       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1109 21:50:52.918732       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:50:52.979635       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1109 21:50:52.984350       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:50:53.029661       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:50:53.058668       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1109 21:50:53.058845       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:50:53.066266       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* I1109 21:50:54.668953       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2020-11-09 21:51:36 UTC, end at Mon 2020-11-09 21:52:36 UTC. --
	* Nov 09 21:52:33 newest-cni-20201109134950-342799 kubelet[1149]: E1109 21:52:33.783284    1149 kuberuntime_sandbox.go:69] CreatePodSandbox for pod "coredns-f9fd979d6-bbqqk_kube-system(f772a51c-bafe-4741-9eb4-9e9fb77abf94)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "79e62bd7520880bfc6cca7ca28359b02bf4346a3cf26fbf4a4afecdd428b5d64" network for pod "coredns-f9fd979d6-bbqqk": networkPlugin cni failed to set up pod "coredns-f9fd979d6-bbqqk_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "79e62bd7520880bfc6cca7ca28359b02bf4346a3cf26fbf4a4afecdd428b5d64" network for pod "coredns-f9fd979d6-bbqqk": networkPlugin cni failed to teardown pod "coredns-f9fd979d6-bbqqk_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.9 -j CNI-0c029b2751d805fec76757c2 -m comment --comment name: "crio" id: "79e62bd7520880bfc6cca7ca28359b02bf4346a3cf26fbf4a4afecdd428b5d64" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-0c029b2751d805fec76757c2':No such file or directory
	* Nov 09 21:52:33 newest-cni-20201109134950-342799 kubelet[1149]: Try `iptables -h' or 'iptables --help' for more information.
	* Nov 09 21:52:33 newest-cni-20201109134950-342799 kubelet[1149]: ]
	* Nov 09 21:52:33 newest-cni-20201109134950-342799 kubelet[1149]: E1109 21:52:33.783312    1149 kuberuntime_manager.go:730] createPodSandbox for pod "coredns-f9fd979d6-bbqqk_kube-system(f772a51c-bafe-4741-9eb4-9e9fb77abf94)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "79e62bd7520880bfc6cca7ca28359b02bf4346a3cf26fbf4a4afecdd428b5d64" network for pod "coredns-f9fd979d6-bbqqk": networkPlugin cni failed to set up pod "coredns-f9fd979d6-bbqqk_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "79e62bd7520880bfc6cca7ca28359b02bf4346a3cf26fbf4a4afecdd428b5d64" network for pod "coredns-f9fd979d6-bbqqk": networkPlugin cni failed to teardown pod "coredns-f9fd979d6-bbqqk_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.9 -j CNI-0c029b2751d805fec76757c2 -m comment --comment name: "crio" id: "79e62bd7520880bfc6cca7ca28359b02bf4346a3cf26fbf4a4afecdd428b5d64" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-0c029b2751d805fec76757c2':No such file or directory
	* Nov 09 21:52:33 newest-cni-20201109134950-342799 kubelet[1149]: Try `iptables -h' or 'iptables --help' for more information.
	* Nov 09 21:52:33 newest-cni-20201109134950-342799 kubelet[1149]: ]
	* Nov 09 21:52:33 newest-cni-20201109134950-342799 kubelet[1149]: E1109 21:52:33.783428    1149 pod_workers.go:191] Error syncing pod f772a51c-bafe-4741-9eb4-9e9fb77abf94 ("coredns-f9fd979d6-bbqqk_kube-system(f772a51c-bafe-4741-9eb4-9e9fb77abf94)"), skipping: failed to "CreatePodSandbox" for "coredns-f9fd979d6-bbqqk_kube-system(f772a51c-bafe-4741-9eb4-9e9fb77abf94)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-f9fd979d6-bbqqk_kube-system(f772a51c-bafe-4741-9eb4-9e9fb77abf94)\" failed: rpc error: code = Unknown desc = [failed to set up sandbox container \"79e62bd7520880bfc6cca7ca28359b02bf4346a3cf26fbf4a4afecdd428b5d64\" network for pod \"coredns-f9fd979d6-bbqqk\": networkPlugin cni failed to set up pod \"coredns-f9fd979d6-bbqqk_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"79e62bd7520880bfc6cca7ca28359b02bf4346a3cf26fbf4a4afecdd428b5d64\" network for pod \"coredns-f9fd979d6-bbqqk\": networkPlugin cni failed to teardown pod \"coredns-f9fd979d6-bbqqk_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.9 -j CNI-0c029b2751d805fec76757c2 -m comment --comment name: \"crio\" id: \"79e62bd7520880bfc6cca7ca28359b02bf4346a3cf26fbf4a4afecdd428b5d64\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-0c029b2751d805fec76757c2':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	* Nov 09 21:52:33 newest-cni-20201109134950-342799 kubelet[1149]: W1109 21:52:33.784740    1149 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-f9fd979d6-bbqqk_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "79e62bd7520880bfc6cca7ca28359b02bf4346a3cf26fbf4a4afecdd428b5d64"
	* Nov 09 21:52:34 newest-cni-20201109134950-342799 kubelet[1149]: W1109 21:52:34.819425    1149 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-f9fd979d6-bbqqk_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "79e62bd7520880bfc6cca7ca28359b02bf4346a3cf26fbf4a4afecdd428b5d64"
	* Nov 09 21:52:34 newest-cni-20201109134950-342799 kubelet[1149]: W1109 21:52:34.860084    1149 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "79e62bd7520880bfc6cca7ca28359b02bf4346a3cf26fbf4a4afecdd428b5d64"
	* Nov 09 21:52:34 newest-cni-20201109134950-342799 kubelet[1149]: W1109 21:52:34.860147    1149 pod_container_deletor.go:79] Container "79e62bd7520880bfc6cca7ca28359b02bf4346a3cf26fbf4a4afecdd428b5d64" not found in pod's containers
	* Nov 09 21:52:35 newest-cni-20201109134950-342799 kubelet[1149]: E1109 21:52:35.529980    1149 cni.go:366] Error adding kube-system_coredns-f9fd979d6-bbqqk/9096b012523a31ed9ecb78d7b74b6e53d77e776877d3102ade4e4b9fcb07a783 to network bridge/crio: failed to set bridge addr: could not add IP address to "cni0": permission denied
	* Nov 09 21:52:35 newest-cni-20201109134950-342799 kubelet[1149]: E1109 21:52:35.653988    1149 cni.go:387] Error deleting kube-system_coredns-f9fd979d6-bbqqk/9096b012523a31ed9ecb78d7b74b6e53d77e776877d3102ade4e4b9fcb07a783 from network bridge/crio: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.10 -j CNI-3af7a4b6984d97bbfde5c75c -m comment --comment name: "crio" id: "9096b012523a31ed9ecb78d7b74b6e53d77e776877d3102ade4e4b9fcb07a783" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3af7a4b6984d97bbfde5c75c':No such file or directory
	* Nov 09 21:52:35 newest-cni-20201109134950-342799 kubelet[1149]: Try `iptables -h' or 'iptables --help' for more information.
	* Nov 09 21:52:35 newest-cni-20201109134950-342799 kubelet[1149]: E1109 21:52:35.910179    1149 remote_runtime.go:113] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = [failed to set up sandbox container "9096b012523a31ed9ecb78d7b74b6e53d77e776877d3102ade4e4b9fcb07a783" network for pod "coredns-f9fd979d6-bbqqk": networkPlugin cni failed to set up pod "coredns-f9fd979d6-bbqqk_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "9096b012523a31ed9ecb78d7b74b6e53d77e776877d3102ade4e4b9fcb07a783" network for pod "coredns-f9fd979d6-bbqqk": networkPlugin cni failed to teardown pod "coredns-f9fd979d6-bbqqk_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.10 -j CNI-3af7a4b6984d97bbfde5c75c -m comment --comment name: "crio" id: "9096b012523a31ed9ecb78d7b74b6e53d77e776877d3102ade4e4b9fcb07a783" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3af7a4b6984d97bbfde5c75c':No such file or directory
	* Nov 09 21:52:35 newest-cni-20201109134950-342799 kubelet[1149]: Try `iptables -h' or 'iptables --help' for more information.
	* Nov 09 21:52:35 newest-cni-20201109134950-342799 kubelet[1149]: ]
	* Nov 09 21:52:35 newest-cni-20201109134950-342799 kubelet[1149]: E1109 21:52:35.910255    1149 kuberuntime_sandbox.go:69] CreatePodSandbox for pod "coredns-f9fd979d6-bbqqk_kube-system(f772a51c-bafe-4741-9eb4-9e9fb77abf94)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "9096b012523a31ed9ecb78d7b74b6e53d77e776877d3102ade4e4b9fcb07a783" network for pod "coredns-f9fd979d6-bbqqk": networkPlugin cni failed to set up pod "coredns-f9fd979d6-bbqqk_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "9096b012523a31ed9ecb78d7b74b6e53d77e776877d3102ade4e4b9fcb07a783" network for pod "coredns-f9fd979d6-bbqqk": networkPlugin cni failed to teardown pod "coredns-f9fd979d6-bbqqk_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.10 -j CNI-3af7a4b6984d97bbfde5c75c -m comment --comment name: "crio" id: "9096b012523a31ed9ecb78d7b74b6e53d77e776877d3102ade4e4b9fcb07a783" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3af7a4b6984d97bbfde5c75c':No such file or directory
	* Nov 09 21:52:35 newest-cni-20201109134950-342799 kubelet[1149]: Try `iptables -h' or 'iptables --help' for more information.
	* Nov 09 21:52:35 newest-cni-20201109134950-342799 kubelet[1149]: ]
	* Nov 09 21:52:35 newest-cni-20201109134950-342799 kubelet[1149]: E1109 21:52:35.910287    1149 kuberuntime_manager.go:730] createPodSandbox for pod "coredns-f9fd979d6-bbqqk_kube-system(f772a51c-bafe-4741-9eb4-9e9fb77abf94)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "9096b012523a31ed9ecb78d7b74b6e53d77e776877d3102ade4e4b9fcb07a783" network for pod "coredns-f9fd979d6-bbqqk": networkPlugin cni failed to set up pod "coredns-f9fd979d6-bbqqk_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "9096b012523a31ed9ecb78d7b74b6e53d77e776877d3102ade4e4b9fcb07a783" network for pod "coredns-f9fd979d6-bbqqk": networkPlugin cni failed to teardown pod "coredns-f9fd979d6-bbqqk_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.10 -j CNI-3af7a4b6984d97bbfde5c75c -m comment --comment name: "crio" id: "9096b012523a31ed9ecb78d7b74b6e53d77e776877d3102ade4e4b9fcb07a783" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3af7a4b6984d97bbfde5c75c':No such file or directory
	* Nov 09 21:52:35 newest-cni-20201109134950-342799 kubelet[1149]: Try `iptables -h' or 'iptables --help' for more information.
	* Nov 09 21:52:35 newest-cni-20201109134950-342799 kubelet[1149]: ]
	* Nov 09 21:52:35 newest-cni-20201109134950-342799 kubelet[1149]: E1109 21:52:35.911622    1149 pod_workers.go:191] Error syncing pod f772a51c-bafe-4741-9eb4-9e9fb77abf94 ("coredns-f9fd979d6-bbqqk_kube-system(f772a51c-bafe-4741-9eb4-9e9fb77abf94)"), skipping: failed to "CreatePodSandbox" for "coredns-f9fd979d6-bbqqk_kube-system(f772a51c-bafe-4741-9eb4-9e9fb77abf94)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-f9fd979d6-bbqqk_kube-system(f772a51c-bafe-4741-9eb4-9e9fb77abf94)\" failed: rpc error: code = Unknown desc = [failed to set up sandbox container \"9096b012523a31ed9ecb78d7b74b6e53d77e776877d3102ade4e4b9fcb07a783\" network for pod \"coredns-f9fd979d6-bbqqk\": networkPlugin cni failed to set up pod \"coredns-f9fd979d6-bbqqk_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"9096b012523a31ed9ecb78d7b74b6e53d77e776877d3102ade4e4b9fcb07a783\" network for pod \"coredns-f9fd979d6-bbqqk\": networkPlugin cni failed to teardown pod \"coredns-f9fd979d6-bbqqk_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.10 -j CNI-3af7a4b6984d97bbfde5c75c -m comment --comment name: \"crio\" id: \"9096b012523a31ed9ecb78d7b74b6e53d77e776877d3102ade4e4b9fcb07a783\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3af7a4b6984d97bbfde5c75c':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	* Nov 09 21:52:35 newest-cni-20201109134950-342799 kubelet[1149]: W1109 21:52:35.914053    1149 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-f9fd979d6-bbqqk_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "9096b012523a31ed9ecb78d7b74b6e53d77e776877d3102ade4e4b9fcb07a783"
	* 
	* ==> storage-provisioner [6e239b637a0a] <==
	* I1109 21:51:20.972315       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I1109 21:51:20.987497       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I1109 21:51:20.988399       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c8283dac-1e32-45d2-9e94-91fa0c9c5f84", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20201109134950-342799_971d08a0-bcc6-4467-92d7-1b2d6c306976 became leader
	* I1109 21:51:20.988443       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20201109134950-342799_971d08a0-bcc6-4467-92d7-1b2d6c306976!
	* I1109 21:51:21.089040       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20201109134950-342799_971d08a0-bcc6-4467-92d7-1b2d6c306976!

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 13:52:35.344991  596676 out.go:286] unable to execute * 2020-11-09 21:51:07.037121 W | etcdserver: request "header:<ID:8039336204269454475 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/newest-cni-20201109134950-342799\" mod_revision:215 > success:<request_put:<key:\"/registry/leases/kube-node-lease/newest-cni-20201109134950-342799\" value_size:608 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/newest-cni-20201109134950-342799\" > >>" with result "size:16" took too long (286.834312ms) to execute
	: html/template:* 2020-11-09 21:51:07.037121 W | etcdserver: request "header:<ID:8039336204269454475 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/newest-cni-20201109134950-342799\" mod_revision:215 > success:<request_put:<key:\"/registry/leases/kube-node-lease/newest-cni-20201109134950-342799\" value_size:608 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/newest-cni-20201109134950-342799\" > >>" with result "size:16" took too long (286.834312ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E1109 13:52:35.361636  596676 out.go:286] unable to execute * 2020-11-09 21:51:09.499332 W | etcdserver: request "header:<ID:8039336204269454480 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20201109134950-342799.1645f5675eaaaa2f\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20201109134950-342799.1645f5675eaaaa2f\" value_size:763 lease:8039336204269454336 >> failure:<>>" with result "size:16" took too long (1.468106766s) to execute
	: html/template:* 2020-11-09 21:51:09.499332 W | etcdserver: request "header:<ID:8039336204269454480 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20201109134950-342799.1645f5675eaaaa2f\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20201109134950-342799.1645f5675eaaaa2f\" value_size:763 lease:8039336204269454336 >> failure:<>>" with result "size:16" took too long (1.468106766s) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.

                                                
                                                
** /stderr **
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20201109134950-342799 -n newest-cni-20201109134950-342799

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
helpers_test.go:255: (dbg) Run:  kubectl --context newest-cni-20201109134950-342799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: non-running pods: coredns-f9fd979d6-bbqqk
helpers_test.go:263: ======> post-mortem[TestStartStop/group/newest-cni/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:266: (dbg) Run:  kubectl --context newest-cni-20201109134950-342799 describe pod coredns-f9fd979d6-bbqqk
helpers_test.go:266: (dbg) Non-zero exit: kubectl --context newest-cni-20201109134950-342799 describe pod coredns-f9fd979d6-bbqqk: exit status 1 (124.224919ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-f9fd979d6-bbqqk" not found

                                                
                                                
** /stderr **
helpers_test.go:268: kubectl --context newest-cni-20201109134950-342799 describe pod coredns-f9fd979d6-bbqqk: exit status 1
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:225: (dbg) Run:  docker inspect newest-cni-20201109134950-342799
helpers_test.go:229: (dbg) docker inspect newest-cni-20201109134950-342799:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4b41662211fbd0875da28ab47f86aecabab8700d409ea63dd534510142270c24",
	        "Created": "2020-11-09T21:49:52.558857361Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 583306,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-11-09T21:51:35.661262577Z",
	            "FinishedAt": "2020-11-09T21:51:33.083357923Z"
	        },
	        "Image": "sha256:e0876e0a2db41a04c8143cbd27a7a5f9e10b610cb093093def031c59e5b44b0c",
	        "ResolvConfPath": "/var/lib/docker/containers/4b41662211fbd0875da28ab47f86aecabab8700d409ea63dd534510142270c24/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b41662211fbd0875da28ab47f86aecabab8700d409ea63dd534510142270c24/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b41662211fbd0875da28ab47f86aecabab8700d409ea63dd534510142270c24/hosts",
	        "LogPath": "/var/lib/docker/containers/4b41662211fbd0875da28ab47f86aecabab8700d409ea63dd534510142270c24/4b41662211fbd0875da28ab47f86aecabab8700d409ea63dd534510142270c24-json.log",
	        "Name": "/newest-cni-20201109134950-342799",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20201109134950-342799:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20201109134950-342799",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9186d985e1d5c956838acea77f1fa674eab7a4ac9591176e19509620636f3624-init/diff:/var/lib/docker/overlay2/95e86d765bff52afd170fbce58d14aba80acb592622f18ddf93f06d4a9f68b7b/diff:/var/lib/docker/overlay2/f3b9d7ca87d0bc2c33a2b73d4acfe36b9573ed84758410f9fb2646a520f241fe/diff:/var/lib/docker/overlay2/207fc0f6ee67dd96ccb25c63e93e7189f5f397b509f8c71ead5583a80c5f94b5/diff:/var/lib/docker/overlay2/8caa5ec9f05b14f99c7708700c1734fe9d519d629a5228c0c3cb1231f1cef09e/diff:/var/lib/docker/overlay2/f2f5d9470c647e56b32cee1438db6b6249291738854b6e7a0a4a405034649aa6/diff:/var/lib/docker/overlay2/99b3672a581ed70eb5d25da5aed44455272a7adbd48148c91dc38abfbaa6e93d/diff:/var/lib/docker/overlay2/4bc3f7ac122327f368cdfb9bf82d578fce25cf7f63160aa08d869610da8889f4/diff:/var/lib/docker/overlay2/0f945a2ada7520d0f95289ddf08a8e7aedfd5dbfad8087529175cd81d61b52bc/diff:/var/lib/docker/overlay2/a14122d44a5bdb05416a0e7fbe1684738e86454c2eeb7671ecc856e6fdaa0273/diff:/var/lib/docker/overlay2/e669c0
1451df27b1aeed4835d4d9d40fae282947f2b260d9bc1d86355f464b80/diff:/var/lib/docker/overlay2/f0227d1f2e3671850f58a9e89d347d8c2a26c40d79b078f02122819452c582d8/diff:/var/lib/docker/overlay2/35b3431b00b5c7bd66b1d24487a8a45645708ce7a87848986068c0df1515b812/diff:/var/lib/docker/overlay2/c1af3457c560ed19e519e64ff919ac7035d8d9801f4f44adf7b160339827ebd0/diff:/var/lib/docker/overlay2/3861f37a52f49a93d483d69cc77384a5792b260325af8224c6225039f7480889/diff:/var/lib/docker/overlay2/05239e7556fc89f82e761ff1ddd2d1ed0ee1b426a039ad3c9a3ba0519d737f51/diff:/var/lib/docker/overlay2/cb1f04e7554841e6f2102080f7854a3c7e464e45648c1419ef2f8b4d8a03f0bb/diff:/var/lib/docker/overlay2/80908997b1122a686bee670e8dd93659dece6b51614d8576f02c11b96178534d/diff:/var/lib/docker/overlay2/6c4d0be909f5f0a95818a6009b44c355b57e39b0709de1cce2ba45597354d590/diff:/var/lib/docker/overlay2/3a855d5f2b162c2d938a5745ba8131c9a2b244967b4a03f6cddd177236f3a934/diff:/var/lib/docker/overlay2/0ab0a2a3e614830a41a389aa6ff55503f9c10b1227acfb927550f9726eb9a605/diff:/var/lib/d
ocker/overlay2/3331dc047d4c1c4d3b6bf48aba65be1ad34cff6d25329ef56e91ad8e4ee65fde/diff:/var/lib/docker/overlay2/d286c2ac5bb00869e09924bb7fc7389073b7267af1cc4c3ae6a339b29e4c0cc1/diff:/var/lib/docker/overlay2/d0a518fbd2bc93756d11040cfe4d7757f96e813dd9ddf918a54a82f7e61cb791/diff:/var/lib/docker/overlay2/2d63359a4d0da5faa54d017f487391caaee972fe4fb98db4b82e7aee2d9b01ee/diff:/var/lib/docker/overlay2/40ca142bba947004e2782bce25bd04810589fffbf8086ba91e3a074517cfa13c/diff:/var/lib/docker/overlay2/5b738c1270b81825ef44e9bd466dd8533bc4e3e3796355d7dc5dc104fb3a18c8/diff:/var/lib/docker/overlay2/c57d8a6f6b648bd2cf9d7140762b8590e1a71b9040c8a0f2b8ab31f65011bcd2/diff:/var/lib/docker/overlay2/37f83aba64f92a6cf695468f5b0a1016634c95e09933dcecec297cca22b9038b/diff:/var/lib/docker/overlay2/396d3843ed4e04893bd4bf961eab2522ce90962e626e057f5be38cbe0ed65d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9186d985e1d5c956838acea77f1fa674eab7a4ac9591176e19509620636f3624/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9186d985e1d5c956838acea77f1fa674eab7a4ac9591176e19509620636f3624/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9186d985e1d5c956838acea77f1fa674eab7a4ac9591176e19509620636f3624/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20201109134950-342799",
	                "Source": "/var/lib/docker/volumes/newest-cni-20201109134950-342799/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20201109134950-342799",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20201109134950-342799",
	                "name.minikube.sigs.k8s.io": "newest-cni-20201109134950-342799",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c140784cf313cbb340d849a13db9f6d6d98a9b36d37dde8f2f25b655635f1ade",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c140784cf313",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20201109134950-342799": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.59.16"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4b41662211fb"
	                    ],
	                    "NetworkID": "6796ea11076b994fb365bae5a339562eb0a3e499155be8b8ac885b173abd65a3",
	                    "EndpointID": "d9ef55aef290b39a723a9f4ef67c909ae32e2f13ce3fd3984feb4576ad350c88",
	                    "Gateway": "192.168.59.1",
	                    "IPAddress": "192.168.59.16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3b:10",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20201109134950-342799 -n newest-cni-20201109134950-342799
helpers_test.go:238: <<< TestStartStop/group/newest-cni/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-20201109134950-342799 logs -n 25
helpers_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-20201109134950-342799 logs -n 25: (3.299026031s)
helpers_test.go:246: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Mon 2020-11-09 21:51:36 UTC, end at Mon 2020-11-09 21:52:39 UTC. --
	* Nov 09 21:51:50 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:50.737680280Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Nov 09 21:51:50 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:50.741731445Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Nov 09 21:51:50 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:50.741772175Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Nov 09 21:51:50 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:50.741832739Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	* Nov 09 21:51:50 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:50.741854436Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Nov 09 21:51:50 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:50.766277467Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Nov 09 21:51:53 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:53.940836641Z" level=warning msg="Your kernel does not support swap memory limit"
	* Nov 09 21:51:53 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:53.940888529Z" level=warning msg="Your kernel does not support cgroup rt period"
	* Nov 09 21:51:53 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:53.940899394Z" level=warning msg="Your kernel does not support cgroup rt runtime"
	* Nov 09 21:51:53 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:53.941126411Z" level=info msg="Loading containers: start."
	* Nov 09 21:51:54 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:54.206991008Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Nov 09 21:51:54 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:54.395086621Z" level=info msg="Loading containers: done."
	* Nov 09 21:51:54 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:54.447172019Z" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 version=19.03.13
	* Nov 09 21:51:54 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:54.447345454Z" level=info msg="Daemon has completed initialization"
	* Nov 09 21:51:54 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:54.468143007Z" level=info msg="API listen on [::]:2376"
	* Nov 09 21:51:54 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:51:54.468199660Z" level=info msg="API listen on /var/run/docker.sock"
	* Nov 09 21:51:54 newest-cni-20201109134950-342799 systemd[1]: Started Docker Application Container Engine.
	* Nov 09 21:52:27 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:52:27.023208284Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:52:28 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:52:28.477824134Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:52:30 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:52:30.550508221Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:52:32 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:52:32.518922188Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:52:33 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:52:33.706343594Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:52:35 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:52:35.807009434Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:52:37 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:52:37.781235929Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Nov 09 21:52:38 newest-cni-20201109134950-342799 dockerd[517]: time="2020-11-09T21:52:38.957812472Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	* 2749b8d949aea       607331163122e       22 seconds ago       Running             kube-apiserver            1                   5665646506ce6
	* c714ee3191748       8603821e1a7a5       22 seconds ago       Running             kube-controller-manager   1                   5490212470de7
	* 945beac6a1937       0369cf4303ffd       22 seconds ago       Running             etcd                      1                   8109405ef81a8
	* 4fd5332423254       2f32d66b884f8       22 seconds ago       Running             kube-scheduler            1                   1186fd00c7d72
	* 6e239b637a0ae       bad58561c4be7       About a minute ago   Exited              storage-provisioner       0                   cae537ffa0a24
	* 6245c18490190       d373dd5a8593a       About a minute ago   Exited              kube-proxy                0                   39c7ae09d2835
	* 648b2e2d5a589       2f32d66b884f8       About a minute ago   Exited              kube-scheduler            0                   a8b4ca883191e
	* 742bb8f23b68a       8603821e1a7a5       About a minute ago   Exited              kube-controller-manager   0                   37cf43b6ffbd1
	* 524074355b73e       0369cf4303ffd       About a minute ago   Exited              etcd                      0                   99e66ece8710b
	* 32aa383f266ae       607331163122e       About a minute ago   Exited              kube-apiserver            0                   f6bd85a6b9e56
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20201109134950-342799
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=newest-cni-20201109134950-342799
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=21ac2a6a37964be4739a8be2fb5a50a8d224597d
	*                     minikube.k8s.io/name=newest-cni-20201109134950-342799
	*                     minikube.k8s.io/updated_at=2020_11_09T13_50_55_0700
	*                     minikube.k8s.io/version=v1.14.2
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Mon, 09 Nov 2020 21:50:51 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  newest-cni-20201109134950-342799
	*   AcquireTime:     <unset>
	*   RenewTime:       Mon, 09 Nov 2020 21:52:35 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Mon, 09 Nov 2020 21:52:25 +0000   Mon, 09 Nov 2020 21:50:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Mon, 09 Nov 2020 21:52:25 +0000   Mon, 09 Nov 2020 21:50:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Mon, 09 Nov 2020 21:52:25 +0000   Mon, 09 Nov 2020 21:50:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Mon, 09 Nov 2020 21:52:25 +0000   Mon, 09 Nov 2020 21:51:06 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.59.16
	*   Hostname:    newest-cni-20201109134950-342799
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 ca6d465f40c7494792e56eccadc1757c
	*   System UUID:                efe4ddc1-fe61-487b-92e1-189128bd4193
	*   Boot ID:                    9ad1ab50-5be9-48e2-8ae1-dc31113bc120
	*   Kernel Version:             4.9.0-14-amd64
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://19.3.13
	*   Kubelet Version:            v1.19.2
	*   Kube-Proxy Version:         v1.19.2
	* PodCIDR:                      192.168.0.0/24
	* PodCIDRs:                     192.168.0.0/24
	* Non-terminated Pods:          (7 in total)
	*   Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	*   kube-system                 coredns-f9fd979d6-bbqqk                                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     82s
	*   kube-system                 etcd-newest-cni-20201109134950-342799                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	*   kube-system                 kube-apiserver-newest-cni-20201109134950-342799             250m (3%)     0 (0%)      0 (0%)           0 (0%)         103s
	*   kube-system                 kube-controller-manager-newest-cni-20201109134950-342799    200m (2%)     0 (0%)      0 (0%)           0 (0%)         103s
	*   kube-system                 kube-proxy-r7l4h                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	*   kube-system                 kube-scheduler-newest-cni-20201109134950-342799             100m (1%)     0 (0%)      0 (0%)           0 (0%)         103s
	*   kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests   Limits
	*   --------           --------   ------
	*   cpu                650m (8%)  0 (0%)
	*   memory             70Mi (0%)  170Mi (0%)
	*   ephemeral-storage  0 (0%)     0 (0%)
	*   hugepages-1Gi      0 (0%)     0 (0%)
	*   hugepages-2Mi      0 (0%)     0 (0%)
	* Events:
	*   Type    Reason                   Age                  From        Message
	*   ----    ------                   ----                 ----        -------
	*   Normal  NodeHasSufficientMemory  116s (x5 over 116s)  kubelet     Node newest-cni-20201109134950-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    116s (x4 over 116s)  kubelet     Node newest-cni-20201109134950-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     116s (x4 over 116s)  kubelet     Node newest-cni-20201109134950-342799 status is now: NodeHasSufficientPID
	*   Normal  Starting                 104s                 kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  104s                 kubelet     Node newest-cni-20201109134950-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    104s                 kubelet     Node newest-cni-20201109134950-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     104s                 kubelet     Node newest-cni-20201109134950-342799 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             104s                 kubelet     Node newest-cni-20201109134950-342799 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  103s                 kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                93s                  kubelet     Node newest-cni-20201109134950-342799 status is now: NodeReady
	*   Normal  Starting                 80s                  kube-proxy  Starting kube-proxy.
	*   Normal  Starting                 41s                  kubelet     Starting kubelet.
	*   Normal  NodeAllocatableEnforced  41s                  kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeHasSufficientPID     35s (x7 over 41s)    kubelet     Node newest-cni-20201109134950-342799 status is now: NodeHasSufficientPID
	*   Normal  NodeHasSufficientMemory  34s (x8 over 41s)    kubelet     Node newest-cni-20201109134950-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    34s (x8 over 41s)    kubelet     Node newest-cni-20201109134950-342799 status is now: NodeHasNoDiskPressure
	* 
	* ==> dmesg <==
	* [  +0.152977] IPv4: martian source 10.85.0.8 from 10.85.0.8, on dev eth0
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 1e 8d 25 be 2f 51 08 06        ........%./Q..
	* [  +0.854747] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000003] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +0.000003] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000002] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +0.276266] IPv4: martian source 10.85.0.9 from 10.85.0.9, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 66 5c 4c 17 93 ad 08 06        ......f\L.....
	* [  +1.739521] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000003] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +0.004131] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000003] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +2.395493] net_ratelimit: 2 callbacks suppressed
	* [  +0.000002] IPv4: martian source 10.85.0.11 from 10.85.0.11, on dev eth0
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff ee 0d 5a ff 93 b5 08 06        ........Z.....
	* [  +1.177206] IPv4: martian source 10.85.0.12 from 10.85.0.12, on dev eth0
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff 72 83 e2 84 53 c0 08 06        ......r...S...
	* [  +0.678855] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000002] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +0.000002] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000001] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* 
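Note on the dmesg output above: the repeated "IPv4: martian source" entries are the kernel flagging packets whose source address is unexpected on the receiving interface (here the docker bridge br-fd09085a8fbb and the 10.85.0.x pod addresses). In these nested Docker/CNI test environments they are usually harmless noise rather than a cause of the failures in this run. As a generic check, not part of the test itself, whether martian logging is enabled on the host can be inspected with:

	sysctl net.ipv4.conf.all.log_martians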
	* ==> etcd [524074355b73] <==
	* 2020-11-09 21:50:45.198680 I | embed: ready to serve client requests
	* 2020-11-09 21:50:45.205944 I | embed: ready to serve client requests
	* 2020-11-09 21:50:45.206908 I | etcdserver: setting up the initial cluster version to 3.4
	* 2020-11-09 21:50:45.213683 I | embed: serving client requests on 192.168.59.16:2379
	* 2020-11-09 21:50:45.213851 I | embed: serving client requests on 127.0.0.1:2379
	* 2020-11-09 21:50:45.219894 N | etcdserver/membership: set the initial cluster version to 3.4
	* 2020-11-09 21:50:45.220003 I | etcdserver/api: enabled capabilities for version 3.4
	* 2020-11-09 21:51:03.409275 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:51:06.545455 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context canceled" took too long (2.000046567s) to execute
	* WARNING: 2020/11/09 21:51:06 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	* 2020-11-09 21:51:06.706974 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503)
	* 2020-11-09 21:51:06.749860 W | wal: sync duration of 2.402983421s, expected less than 1s
	* 2020-11-09 21:51:07.036548 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/pvc-protection-controller\" " with result "range_response_count:1 size:218" took too long (2.686492399s) to execute
	* 2020-11-09 21:51:07.037121 W | etcdserver: request "header:<ID:8039336204269454475 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/newest-cni-20201109134950-342799\" mod_revision:215 > success:<request_put:<key:\"/registry/leases/kube-node-lease/newest-cni-20201109134950-342799\" value_size:608 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/newest-cni-20201109134950-342799\" > >>" with result "size:16" took too long (286.834312ms) to execute
	* 2020-11-09 21:51:08.030737 W | wal: sync duration of 1.010469087s, expected less than 1s
	* 2020-11-09 21:51:09.496453 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-newest-cni-20201109134950-342799\" " with result "range_response_count:1 size:4345" took too long (4.072505659s) to execute
	* 2020-11-09 21:51:09.498944 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (5.038542636s) to execute
	* 2020-11-09 21:51:09.499332 W | etcdserver: request "header:<ID:8039336204269454480 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20201109134950-342799.1645f5675eaaaa2f\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20201109134950-342799.1645f5675eaaaa2f\" value_size:763 lease:8039336204269454336 >> failure:<>>" with result "size:16" took too long (1.468106766s) to execute
	* 2020-11-09 21:51:12.261279 W | wal: sync duration of 4.230351973s, expected less than 1s
	* 2020-11-09 21:51:12.263707 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/namespace-controller\" " with result "range_response_count:0 size:5" took too long (2.758477528s) to execute
	* 2020-11-09 21:51:12.271236 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (2.319642491s) to execute
	* 2020-11-09 21:51:15.707227 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:51:22.573681 N | pkg/osutil: received terminated signal, shutting down...
	* WARNING: 2020/11/09 21:51:22 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* 2020-11-09 21:51:22.664764 I | etcdserver: skipped leadership transfer for single voting member cluster
	* 
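Note on the etcd log above: warnings such as "wal: sync duration of 2.402983421s, expected less than 1s" and multi-second read-only range requests typically indicate slow fsync on the backing disk, which is plausible here given the load average above 14 shown in the kernel section below. One way to confirm etcd latency from inside the node (a sketch, assuming etcdctl is present in the etcd container, as it is in the stock kubeadm etcd image, and reusing the cert paths printed in the second etcd log) would be:

	ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint status -w table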
	* ==> etcd [945beac6a193] <==
	* 2020-11-09 21:52:18.146786 I | etcdserver: restarting member 47984c33979a6f91 in cluster 79741b01b410835d at commit index 426
	* raft2020/11/09 21:52:18 INFO: 47984c33979a6f91 switched to configuration voters=()
	* raft2020/11/09 21:52:18 INFO: 47984c33979a6f91 became follower at term 2
	* raft2020/11/09 21:52:18 INFO: newRaft 47984c33979a6f91 [peers: [], term: 2, commit: 426, applied: 0, lastindex: 426, lastterm: 2]
	* 2020-11-09 21:52:18.159522 W | auth: simple token is not cryptographically signed
	* 2020-11-09 21:52:18.227523 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	* raft2020/11/09 21:52:18 INFO: 47984c33979a6f91 switched to configuration voters=(5158957157623426961)
	* 2020-11-09 21:52:18.271992 I | etcdserver/membership: added member 47984c33979a6f91 [https://192.168.59.16:2380] to cluster 79741b01b410835d
	* 2020-11-09 21:52:18.272110 N | etcdserver/membership: set the initial cluster version to 3.4
	* 2020-11-09 21:52:18.272156 I | etcdserver/api: enabled capabilities for version 3.4
	* 2020-11-09 21:52:18.313894 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	* 2020-11-09 21:52:18.314388 I | embed: listening for metrics on http://127.0.0.1:2381
	* 2020-11-09 21:52:18.314978 I | embed: listening for peers on 192.168.59.16:2380
	* raft2020/11/09 21:52:19 INFO: 47984c33979a6f91 is starting a new election at term 2
	* raft2020/11/09 21:52:19 INFO: 47984c33979a6f91 became candidate at term 3
	* raft2020/11/09 21:52:19 INFO: 47984c33979a6f91 received MsgVoteResp from 47984c33979a6f91 at term 3
	* raft2020/11/09 21:52:19 INFO: 47984c33979a6f91 became leader at term 3
	* raft2020/11/09 21:52:19 INFO: raft.node: 47984c33979a6f91 elected leader 47984c33979a6f91 at term 3
	* 2020-11-09 21:52:19.458516 I | etcdserver: published {Name:newest-cni-20201109134950-342799 ClientURLs:[https://192.168.59.16:2379]} to cluster 79741b01b410835d
	* 2020-11-09 21:52:19.458854 I | embed: ready to serve client requests
	* 2020-11-09 21:52:19.459341 I | embed: ready to serve client requests
	* 2020-11-09 21:52:19.461127 I | embed: serving client requests on 192.168.59.16:2379
	* 2020-11-09 21:52:19.461646 I | embed: serving client requests on 127.0.0.1:2379
	* 2020-11-09 21:52:31.224496 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:52:32.079685 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> kernel <==
	*  21:52:40 up  1:35,  0 users,  load average: 14.12, 11.26, 9.05
	* Linux newest-cni-20201109134950-342799 4.9.0-14-amd64 #1 SMP Debian 4.9.240-2 (2020-10-30) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [2749b8d949ae] <==
	* I1109 21:52:25.504893       1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key
	* I1109 21:52:25.505096       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	* I1109 21:52:25.505385       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	* I1109 21:52:25.504120       1 customresource_discovery_controller.go:209] Starting DiscoveryController
	* I1109 21:52:25.505128       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	* I1109 21:52:25.506264       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	* I1109 21:52:25.504186       1 autoregister_controller.go:141] Starting autoregister controller
	* I1109 21:52:25.506605       1 cache.go:32] Waiting for caches to sync for autoregister controller
	* I1109 21:52:25.506149       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	* I1109 21:52:25.506161       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	* E1109 21:52:25.604810       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	* I1109 21:52:25.604939       1 cache.go:39] Caches are synced for AvailableConditionController controller
	* I1109 21:52:25.605215       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	* I1109 21:52:25.605868       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	* I1109 21:52:25.607114       1 cache.go:39] Caches are synced for autoregister controller
	* I1109 21:52:25.607406       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	* I1109 21:52:25.658402       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	* I1109 21:52:26.501142       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	* I1109 21:52:26.501184       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	* I1109 21:52:26.509829       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	* I1109 21:52:27.769821       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	* I1109 21:52:27.796893       1 controller.go:606] quota admission added evaluator for: deployments.apps
	* I1109 21:52:27.884889       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	* I1109 21:52:27.912638       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	* I1109 21:52:27.931744       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	* 
	* ==> kube-apiserver [32aa383f266a] <==
	* W1109 21:51:31.748992       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:31.760214       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:31.795037       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:31.818232       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:31.839727       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:31.884704       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:31.898255       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:31.922828       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.002043       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.036997       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.090075       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.113242       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.137692       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.148991       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.156547       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.203769       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.231625       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.235764       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.283977       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.291888       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.301237       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.383096       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.479491       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.539080       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1109 21:51:32.558715       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* 
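Note on the apiserver log above: this older apiserver instance (32aa383f266a) is repeatedly failing to reach etcd on 127.0.0.1:2379, and the timestamps line up with the first etcd instance receiving a terminated signal at 21:51:22, so these messages reflect the control-plane restart rather than an independent fault. Once the newer apiserver (2749b8d949ae) is serving, its etcd connectivity can be verified with a standard health probe, for example:

	kubectl --context newest-cni-20201109134950-342799 get --raw /healthz/etcd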
	* ==> kube-controller-manager [742bb8f23b68] <==
	* I1109 21:51:17.677620       1 shared_informer.go:247] Caches are synced for deployment 
	* I1109 21:51:17.677665       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	* I1109 21:51:17.685553       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-f9fd979d6 to 2"
	* E1109 21:51:17.687052       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"c74cd31b-91a8-4807-8b33-a0854ca4ea6a", ResourceVersion:"208", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63740555454, loc:(*time.Location)(0x6a59c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001cb87c0), FieldsType:"FieldsV1", FieldsV1:(*v1.Fiel
dsV1)(0xc001cb87e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001cb8800), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)
(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0000a6e00), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v
1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001cb8820), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersi
stentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001cb8840), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.
DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.19.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"",
ValueFrom:(*v1.EnvVarSource)(0xc001cb8880)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001c95380), Stdin:false, StdinOnce:false, TTY:false}},
EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00044b168), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0009d6a80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConf
ig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000715f38)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00044b1b8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	* I1109 21:51:17.695046       1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-snvwn"
	* I1109 21:51:17.757561       1 shared_informer.go:247] Caches are synced for taint 
	* I1109 21:51:17.757772       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	* I1109 21:51:17.757848       1 taint_manager.go:187] Starting NoExecuteTaintManager
	* W1109 21:51:17.757901       1 node_lifecycle_controller.go:1044] Missing timestamp for Node newest-cni-20201109134950-342799. Assuming now as a timestamp.
	* I1109 21:51:17.757956       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
	* I1109 21:51:17.758147       1 event.go:291] "Event occurred" object="newest-cni-20201109134950-342799" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20201109134950-342799 event: Registered Node newest-cni-20201109134950-342799 in Controller"
	* I1109 21:51:17.774853       1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-bbqqk"
	* I1109 21:51:17.774900       1 shared_informer.go:247] Caches are synced for disruption 
	* I1109 21:51:17.775012       1 disruption.go:339] Sending events to api server.
	* I1109 21:51:17.775929       1 shared_informer.go:247] Caches are synced for attach detach 
	* I1109 21:51:17.778110       1 shared_informer.go:247] Caches are synced for resource quota 
	* I1109 21:51:17.778834       1 shared_informer.go:247] Caches are synced for resource quota 
	* E1109 21:51:17.782415       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"c74cd31b-91a8-4807-8b33-a0854ca4ea6a", ResourceVersion:"340", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63740555454, loc:(*time.Location)(0x6a59c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0010cd1e0), FieldsType:"FieldsV1", FieldsV1:(*v1.Fiel
dsV1)(0xc0010cd240)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0010cd2a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0010cd300)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0010cd360), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistent
DiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00127e880), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.S
caleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0010cd3c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolum
eSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0010cd420), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBD
VolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.19.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf
", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0010cd4e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMes
sagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00171c180), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000e4af38), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000cbe690), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operat
or:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0000c6b30)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000e4b018)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object h
as been modified; please apply your changes to the latest version and try again
	* I1109 21:51:17.824967       1 shared_informer.go:247] Caches are synced for ReplicationController 
	* I1109 21:51:17.867045       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I1109 21:51:18.138641       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:51:18.138679       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I1109 21:51:18.167466       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:51:18.459787       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-f9fd979d6 to 1"
	* I1109 21:51:18.475190       1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-f9fd979d6-snvwn"
	* 
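Note on the controller-manager log above: the two long daemon_controller.go errors are optimistic-concurrency conflicts — kubeadm and the controller-manager both update the kube-proxy DaemonSet during bring-up (see the ManagedFields entries), and the losing writer gets "the object has been modified; please apply your changes to the latest version and try again". The controller retries, so this is normally benign. A quick check that the DaemonSet converged anyway, using the same context as the rest of this run, would be:

	kubectl --context newest-cni-20201109134950-342799 -n kube-system rollout status daemonset/kube-proxy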
	* ==> kube-controller-manager [c714ee319174] <==
	* I1109 21:52:39.984614       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	* I1109 21:52:39.984656       1 resource_quota_monitor.go:303] QuotaMonitor running
	* I1109 21:52:39.998441       1 controllermanager.go:549] Started "replicaset"
	* I1109 21:52:39.998628       1 replica_set.go:182] Starting replicaset controller
	* I1109 21:52:39.998644       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
	* I1109 21:52:40.006959       1 controllermanager.go:549] Started "statefulset"
	* I1109 21:52:40.007113       1 stateful_set.go:146] Starting stateful set controller
	* I1109 21:52:40.007128       1 shared_informer.go:240] Waiting for caches to sync for stateful set
	* I1109 21:52:40.081229       1 controllermanager.go:549] Started "bootstrapsigner"
	* I1109 21:52:40.081285       1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
	* I1109 21:52:40.230445       1 node_lifecycle_controller.go:77] Sending events to api server
	* E1109 21:52:40.230506       1 core.go:230] failed to start cloud node lifecycle controller: no cloud provider provided
	* W1109 21:52:40.230519       1 controllermanager.go:541] Skipping "cloud-node-lifecycle"
	* I1109 21:52:40.530952       1 controllermanager.go:549] Started "garbagecollector"
	* I1109 21:52:40.531219       1 garbagecollector.go:128] Starting garbage collector controller
	* I1109 21:52:40.531240       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I1109 21:52:40.531292       1 graph_builder.go:282] GraphBuilder running
	* I1109 21:52:40.681192       1 controllermanager.go:549] Started "csrapproving"
	* I1109 21:52:40.681279       1 certificate_controller.go:118] Starting certificate controller "csrapproving"
	* I1109 21:52:40.681290       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
	* I1109 21:52:40.831711       1 controllermanager.go:549] Started "ttl"
	* W1109 21:52:40.831746       1 core.go:244] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
	* I1109 21:52:40.831750       1 ttl_controller.go:118] Starting TTL controller
	* W1109 21:52:40.831755       1 controllermanager.go:541] Skipping "route"
	* I1109 21:52:40.831768       1 shared_informer.go:240] Waiting for caches to sync for TTL
	* 
	* ==> kube-proxy [6245c1849019] <==
	* I1109 21:51:19.614770       1 node.go:136] Successfully retrieved node IP: 192.168.59.16
	* I1109 21:51:19.614886       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.59.16), assume IPv4 operation
	* W1109 21:51:19.869621       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
	* I1109 21:51:19.869755       1 server_others.go:186] Using iptables Proxier.
	* I1109 21:51:19.870063       1 server.go:650] Version: v1.19.2
	* I1109 21:51:19.870647       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:51:19.870754       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1109 21:51:19.870824       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1109 21:51:19.871454       1 config.go:315] Starting service config controller
	* I1109 21:51:19.871474       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I1109 21:51:19.873448       1 config.go:224] Starting endpoint slice config controller
	* I1109 21:51:19.873665       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I1109 21:51:19.971712       1 shared_informer.go:247] Caches are synced for service config 
	* I1109 21:51:19.974690       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* 
	* ==> kube-scheduler [4fd533242325] <==
	* I1109 21:52:17.963799       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:52:17.963875       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:52:19.260360       1 serving.go:331] Generated self-signed cert in-memory
	* W1109 21:52:25.534229       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	* W1109 21:52:25.534285       1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	* W1109 21:52:25.534298       1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
	* W1109 21:52:25.534307       1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	* I1109 21:52:25.597204       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:52:25.597239       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:52:25.614305       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	* I1109 21:52:25.614451       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:52:25.614462       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:52:25.614482       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* I1109 21:52:25.715195       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
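Note on the scheduler log above: the requestheader_controller/authentication warnings appear while the restarted scheduler starts up before the extension-apiserver-authentication ConfigMap and its reader role are reachable; the final "Caches are synced" line shows it recovered on its own. If the warning persisted, the log message itself suggests the remedy — shown here with ROLEBINDING_NAME and YOUR_NS:YOUR_SA left as placeholders, exactly as in the log, to be filled in for the affected component:

	kubectl create rolebinding -n kube-system ROLEBINDING_NAME \
	  --role=extension-apiserver-authentication-reader \
	  --serviceaccount=YOUR_NS:YOUR_SA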
	* ==> kube-scheduler [648b2e2d5a58] <==
	* E1109 21:50:51.871946       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1109 21:50:51.871986       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:50:51.871971       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:50:51.872138       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:50:51.872148       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:50:51.872228       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:50:51.872278       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:50:51.872289       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1109 21:50:51.872453       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:50:51.873090       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:50:51.874897       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:50:51.875063       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:50:51.875276       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1109 21:50:52.724384       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:50:52.741702       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:50:52.751251       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:50:52.790958       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1109 21:50:52.918732       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:50:52.979635       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1109 21:50:52.984350       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:50:53.029661       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:50:53.058668       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1109 21:50:53.058845       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:50:53.066266       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* I1109 21:50:54.668953       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2020-11-09 21:51:36 UTC, end at Mon 2020-11-09 21:52:41 UTC. --
	* Nov 09 21:52:40 newest-cni-20201109134950-342799 kubelet[1149]: W1109 21:52:40.087063    1149 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-f9fd979d6-bbqqk_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "8a854098767ce864a3c9938dadb611964a119f6672e6d4b39c8f5edf52ef719c"
	* Nov 09 21:52:40 newest-cni-20201109134950-342799 kubelet[1149]: W1109 21:52:40.096478    1149 pod_container_deletor.go:79] Container "8a854098767ce864a3c9938dadb611964a119f6672e6d4b39c8f5edf52ef719c" not found in pod's containers
	* Nov 09 21:52:40 newest-cni-20201109134950-342799 kubelet[1149]: W1109 21:52:40.098843    1149 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "8a854098767ce864a3c9938dadb611964a119f6672e6d4b39c8f5edf52ef719c"
	* Nov 09 21:52:40 newest-cni-20201109134950-342799 kubelet[1149]: E1109 21:52:40.689813    1149 cni.go:366] Error adding kube-system_coredns-f9fd979d6-bbqqk/ba138e2d49e7d80c279ed7fb5ef2c0617d906cfeac7216ff6096dc631877133a to network bridge/crio: failed to set bridge addr: could not add IP address to "cni0": permission denied
	* Nov 09 21:52:40 newest-cni-20201109134950-342799 kubelet[1149]: E1109 21:52:40.772315    1149 cni.go:387] Error deleting kube-system_coredns-f9fd979d6-bbqqk/ba138e2d49e7d80c279ed7fb5ef2c0617d906cfeac7216ff6096dc631877133a from network bridge/crio: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.13 -j CNI-d341e14039a17cc863d589a4 -m comment --comment name: "crio" id: "ba138e2d49e7d80c279ed7fb5ef2c0617d906cfeac7216ff6096dc631877133a" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-d341e14039a17cc863d589a4':No such file or directory
	* Nov 09 21:52:40 newest-cni-20201109134950-342799 kubelet[1149]: Try `iptables -h' or 'iptables --help' for more information.
	* Nov 09 21:52:41 newest-cni-20201109134950-342799 kubelet[1149]: E1109 21:52:41.031224    1149 remote_runtime.go:113] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = [failed to set up sandbox container "ba138e2d49e7d80c279ed7fb5ef2c0617d906cfeac7216ff6096dc631877133a" network for pod "coredns-f9fd979d6-bbqqk": networkPlugin cni failed to set up pod "coredns-f9fd979d6-bbqqk_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "ba138e2d49e7d80c279ed7fb5ef2c0617d906cfeac7216ff6096dc631877133a" network for pod "coredns-f9fd979d6-bbqqk": networkPlugin cni failed to teardown pod "coredns-f9fd979d6-bbqqk_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.13 -j CNI-d341e14039a17cc863d589a4 -m comment --comment name: "crio" id: "ba138e2d49e7d80c279ed7fb5ef2c0617d906cfeac7216ff6096dc631877133a" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target
`CNI-d341e14039a17cc863d589a4':No such file or directory
	* Nov 09 21:52:41 newest-cni-20201109134950-342799 kubelet[1149]: Try `iptables -h' or 'iptables --help' for more information.
	* Nov 09 21:52:41 newest-cni-20201109134950-342799 kubelet[1149]: ]
	* Nov 09 21:52:41 newest-cni-20201109134950-342799 kubelet[1149]: E1109 21:52:41.031309    1149 kuberuntime_sandbox.go:69] CreatePodSandbox for pod "coredns-f9fd979d6-bbqqk_kube-system(f772a51c-bafe-4741-9eb4-9e9fb77abf94)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "ba138e2d49e7d80c279ed7fb5ef2c0617d906cfeac7216ff6096dc631877133a" network for pod "coredns-f9fd979d6-bbqqk": networkPlugin cni failed to set up pod "coredns-f9fd979d6-bbqqk_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "ba138e2d49e7d80c279ed7fb5ef2c0617d906cfeac7216ff6096dc631877133a" network for pod "coredns-f9fd979d6-bbqqk": networkPlugin cni failed to teardown pod "coredns-f9fd979d6-bbqqk_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.13 -j CNI-d341e14039a17cc863d589a4 -m comment --comment name: "crio" id: "ba138e2d49e7d80c279ed7fb5ef2c0617d906cfeac7216ff6096dc631877133a"
--wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-d341e14039a17cc863d589a4':No such file or directory
	* Nov 09 21:52:41 newest-cni-20201109134950-342799 kubelet[1149]: Try `iptables -h' or 'iptables --help' for more information.
	* Nov 09 21:52:41 newest-cni-20201109134950-342799 kubelet[1149]: ]
	* Nov 09 21:52:41 newest-cni-20201109134950-342799 kubelet[1149]: E1109 21:52:41.031331    1149 kuberuntime_manager.go:730] createPodSandbox for pod "coredns-f9fd979d6-bbqqk_kube-system(f772a51c-bafe-4741-9eb4-9e9fb77abf94)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "ba138e2d49e7d80c279ed7fb5ef2c0617d906cfeac7216ff6096dc631877133a" network for pod "coredns-f9fd979d6-bbqqk": networkPlugin cni failed to set up pod "coredns-f9fd979d6-bbqqk_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "ba138e2d49e7d80c279ed7fb5ef2c0617d906cfeac7216ff6096dc631877133a" network for pod "coredns-f9fd979d6-bbqqk": networkPlugin cni failed to teardown pod "coredns-f9fd979d6-bbqqk_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.13 -j CNI-d341e14039a17cc863d589a4 -m comment --comment name: "crio" id: "ba138e2d49e7d80c279ed7fb5ef2c0617d906cfeac7216ff6096dc631877133a"
--wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-d341e14039a17cc863d589a4':No such file or directory
	* Nov 09 21:52:41 newest-cni-20201109134950-342799 kubelet[1149]: Try `iptables -h' or 'iptables --help' for more information.
	* Nov 09 21:52:41 newest-cni-20201109134950-342799 kubelet[1149]: ]
	* Nov 09 21:52:41 newest-cni-20201109134950-342799 kubelet[1149]: E1109 21:52:41.031674    1149 pod_workers.go:191] Error syncing pod f772a51c-bafe-4741-9eb4-9e9fb77abf94 ("coredns-f9fd979d6-bbqqk_kube-system(f772a51c-bafe-4741-9eb4-9e9fb77abf94)"), skipping: failed to "CreatePodSandbox" for "coredns-f9fd979d6-bbqqk_kube-system(f772a51c-bafe-4741-9eb4-9e9fb77abf94)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-f9fd979d6-bbqqk_kube-system(f772a51c-bafe-4741-9eb4-9e9fb77abf94)\" failed: rpc error: code = Unknown desc = [failed to set up sandbox container \"ba138e2d49e7d80c279ed7fb5ef2c0617d906cfeac7216ff6096dc631877133a\" network for pod \"coredns-f9fd979d6-bbqqk\": networkPlugin cni failed to set up pod \"coredns-f9fd979d6-bbqqk_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"ba138e2d49e7d80c279ed7fb5ef2c0617d906cfeac7216ff6096dc631877133a\" network for pod \"coredns-f9fd979d6-bbqqk\":
networkPlugin cni failed to teardown pod \"coredns-f9fd979d6-bbqqk_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.13 -j CNI-d341e14039a17cc863d589a4 -m comment --comment name: \"crio\" id: \"ba138e2d49e7d80c279ed7fb5ef2c0617d906cfeac7216ff6096dc631877133a\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-d341e14039a17cc863d589a4':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	* Nov 09 21:52:41 newest-cni-20201109134950-342799 kubelet[1149]: W1109 21:52:41.177408    1149 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-f9fd979d6-bbqqk_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "ba138e2d49e7d80c279ed7fb5ef2c0617d906cfeac7216ff6096dc631877133a"
	* Nov 09 21:52:41 newest-cni-20201109134950-342799 kubelet[1149]: W1109 21:52:41.195214    1149 pod_container_deletor.go:79] Container "ba138e2d49e7d80c279ed7fb5ef2c0617d906cfeac7216ff6096dc631877133a" not found in pod's containers
	* Nov 09 21:52:41 newest-cni-20201109134950-342799 kubelet[1149]: W1109 21:52:41.199612    1149 cni.go:333] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "ba138e2d49e7d80c279ed7fb5ef2c0617d906cfeac7216ff6096dc631877133a"
	* Nov 09 21:52:41 newest-cni-20201109134950-342799 kubelet[1149]: I1109 21:52:41.371023    1149 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 09 21:52:41 newest-cni-20201109134950-342799 kubelet[1149]: I1109 21:52:41.382360    1149 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 09 21:52:41 newest-cni-20201109134950-342799 kubelet[1149]: I1109 21:52:41.480054    1149 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-sx48j" (UniqueName: "kubernetes.io/secret/79b37acd-be0f-4eae-b88a-78fec1df2953-kubernetes-dashboard-token-sx48j") pod "dashboard-metrics-scraper-c95fcf479-s8ln7" (UID: "79b37acd-be0f-4eae-b88a-78fec1df2953")
	* Nov 09 21:52:41 newest-cni-20201109134950-342799 kubelet[1149]: I1109 21:52:41.480159    1149 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/79b37acd-be0f-4eae-b88a-78fec1df2953-tmp-volume") pod "dashboard-metrics-scraper-c95fcf479-s8ln7" (UID: "79b37acd-be0f-4eae-b88a-78fec1df2953")
	* Nov 09 21:52:41 newest-cni-20201109134950-342799 kubelet[1149]: I1109 21:52:41.580544    1149 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/932ca6dc-fdc5-49da-8cbf-6edd0f9c5f29-tmp-volume") pod "kubernetes-dashboard-584f46694c-7drpr" (UID: "932ca6dc-fdc5-49da-8cbf-6edd0f9c5f29")
	* Nov 09 21:52:41 newest-cni-20201109134950-342799 kubelet[1149]: I1109 21:52:41.580628    1149 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-sx48j" (UniqueName: "kubernetes.io/secret/932ca6dc-fdc5-49da-8cbf-6edd0f9c5f29-kubernetes-dashboard-token-sx48j") pod "kubernetes-dashboard-584f46694c-7drpr" (UID: "932ca6dc-fdc5-49da-8cbf-6edd0f9c5f29")
	* 
	* ==> storage-provisioner [6e239b637a0a] <==
	* I1109 21:51:20.972315       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I1109 21:51:20.987497       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I1109 21:51:20.988399       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c8283dac-1e32-45d2-9e94-91fa0c9c5f84", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20201109134950-342799_971d08a0-bcc6-4467-92d7-1b2d6c306976 became leader
	* I1109 21:51:20.988443       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20201109134950-342799_971d08a0-bcc6-4467-92d7-1b2d6c306976!
	* I1109 21:51:21.089040       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20201109134950-342799_971d08a0-bcc6-4467-92d7-1b2d6c306976!

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 13:52:40.092647  598454 out.go:286] unable to execute * 2020-11-09 21:51:07.037121 W | etcdserver: request "header:<ID:8039336204269454475 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/newest-cni-20201109134950-342799\" mod_revision:215 > success:<request_put:<key:\"/registry/leases/kube-node-lease/newest-cni-20201109134950-342799\" value_size:608 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/newest-cni-20201109134950-342799\" > >>" with result "size:16" took too long (286.834312ms) to execute
	: html/template:* 2020-11-09 21:51:07.037121 W | etcdserver: request "header:<ID:8039336204269454475 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/newest-cni-20201109134950-342799\" mod_revision:215 > success:<request_put:<key:\"/registry/leases/kube-node-lease/newest-cni-20201109134950-342799\" value_size:608 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/newest-cni-20201109134950-342799\" > >>" with result "size:16" took too long (286.834312ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E1109 13:52:40.109493  598454 out.go:286] unable to execute * 2020-11-09 21:51:09.499332 W | etcdserver: request "header:<ID:8039336204269454480 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20201109134950-342799.1645f5675eaaaa2f\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20201109134950-342799.1645f5675eaaaa2f\" value_size:763 lease:8039336204269454336 >> failure:<>>" with result "size:16" took too long (1.468106766s) to execute
	: html/template:* 2020-11-09 21:51:09.499332 W | etcdserver: request "header:<ID:8039336204269454480 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20201109134950-342799.1645f5675eaaaa2f\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-newest-cni-20201109134950-342799.1645f5675eaaaa2f\" value_size:763 lease:8039336204269454336 >> failure:<>>" with result "size:16" took too long (1.468106766s) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.

                                                
                                                
** /stderr **
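The storage-provisioner block above (leaderelection.go:242 and :252) shows client-go's leader-election helper acquiring the kube-system/k8s.io-minikube-hostpath lock before the provisioner controller starts. A minimal sketch of that flow with client-go follows; the identity string and timing values are placeholders, not minikube's actual settings.

package main

import (
	"context"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lock on the same object name the log shows:
	// kube-system/k8s.io-minikube-hostpath.
	lock, err := resourcelock.New(
		resourcelock.EndpointsResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: "example-identity"}, // placeholder identity
	)
	if err != nil {
		panic(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // placeholder timings
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// "became leader" in the log: start the provisioner controller here.
			},
			OnStoppedLeading: func() {},
		},
	})
}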
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20201109134950-342799 -n newest-cni-20201109134950-342799
helpers_test.go:255: (dbg) Run:  kubectl --context newest-cni-20201109134950-342799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: non-running pods: coredns-f9fd979d6-bbqqk dashboard-metrics-scraper-c95fcf479-s8ln7 kubernetes-dashboard-584f46694c-7drpr
helpers_test.go:263: ======> post-mortem[TestStartStop/group/newest-cni/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:266: (dbg) Run:  kubectl --context newest-cni-20201109134950-342799 describe pod coredns-f9fd979d6-bbqqk dashboard-metrics-scraper-c95fcf479-s8ln7 kubernetes-dashboard-584f46694c-7drpr
helpers_test.go:266: (dbg) Non-zero exit: kubectl --context newest-cni-20201109134950-342799 describe pod coredns-f9fd979d6-bbqqk dashboard-metrics-scraper-c95fcf479-s8ln7 kubernetes-dashboard-584f46694c-7drpr: exit status 1 (102.530674ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-f9fd979d6-bbqqk" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-c95fcf479-s8ln7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-584f46694c-7drpr" not found

                                                
                                                
** /stderr **
helpers_test.go:268: kubectl --context newest-cni-20201109134950-342799 describe pod coredns-f9fd979d6-bbqqk dashboard-metrics-scraper-c95fcf479-s8ln7 kubernetes-dashboard-584f46694c-7drpr: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (10.08s)
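helpers_test.go:261 above builds the non-running pod list from kubectl's --field-selector=status.phase!=Running, and by the time the describe at helpers_test.go:266 runs, those pods no longer exist, hence the NotFound errors. A minimal client-go sketch of the same field-selector query (the kubeconfig handling here is an illustrative assumption, not what the harness does):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig; the harness instead passes --context to kubectl.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same query as `kubectl get po -A --field-selector=status.phase!=Running`.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\n", p.Namespace, p.Name)
	}
}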

                                                
                                    
TestStartStop/group/containerd/serial/VerifyKubernetesImages (9.45s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/VerifyKubernetesImages
start_stop_delete_test.go:232: (dbg) Run:  out/minikube-linux-amd64 ssh -p containerd-20201109134931-342799 "sudo crictl images -o json"
start_stop_delete_test.go:232: Found non-minikube image: kindest/kindnetd:0.5.4
start_stop_delete_test.go:232: Found non-minikube image: library/busybox:1.28.4-glibc
start_stop_delete_test.go:232: Found non-minikube image: library/minikube-local-cache-test:functional-20201109132758-342799
start_stop_delete_test.go:232: v1.19.2 images mismatch (-want +got):
[]string{
- 	"docker.io/kubernetesui/dashboard:v2.0.3",
- 	"docker.io/kubernetesui/metrics-scraper:v1.0.4",
	"gcr.io/k8s-minikube/storage-provisioner:v3",
	"k8s.gcr.io/coredns:1.7.0",
	... // 4 identical elements
	"k8s.gcr.io/kube-scheduler:v1.19.2",
	"k8s.gcr.io/pause:3.2",
+ 	"kubernetesui/dashboard:v2.0.3",
+ 	"kubernetesui/metrics-scraper:v1.0.4",
}
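The -want +got block above is the diff layout produced by the go-cmp package (the "... // 4 identical elements" line is its elision marker), and what it flags is that containerd reports the dashboard images without the implicit docker.io/ prefix the expected list carries. A small standalone reproduction, assuming go-cmp (the exact comparison in start_stop_delete_test.go may differ):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// What the test wants: fully-qualified image names.
	want := []string{
		"docker.io/kubernetesui/dashboard:v2.0.3",
		"docker.io/kubernetesui/metrics-scraper:v1.0.4",
		"k8s.gcr.io/pause:3.2",
	}
	// What crictl reported: containerd drops the implicit docker.io/ prefix.
	got := []string{
		"k8s.gcr.io/pause:3.2",
		"kubernetesui/dashboard:v2.0.3",
		"kubernetesui/metrics-scraper:v1.0.4",
	}
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("images mismatch (-want +got):\n%s", diff)
	}
}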
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/containerd/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:225: (dbg) Run:  docker inspect containerd-20201109134931-342799
helpers_test.go:229: (dbg) docker inspect containerd-20201109134931-342799:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "99b356bd45d05677203cb4825dc4a93983c5623a7f06719039b28adcb5fd7500",
	        "Created": "2020-11-09T21:49:34.08306062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 587551,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-11-09T21:51:56.702777865Z",
	            "FinishedAt": "2020-11-09T21:51:54.068401138Z"
	        },
	        "Image": "sha256:e0876e0a2db41a04c8143cbd27a7a5f9e10b610cb093093def031c59e5b44b0c",
	        "ResolvConfPath": "/var/lib/docker/containers/99b356bd45d05677203cb4825dc4a93983c5623a7f06719039b28adcb5fd7500/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99b356bd45d05677203cb4825dc4a93983c5623a7f06719039b28adcb5fd7500/hostname",
	        "HostsPath": "/var/lib/docker/containers/99b356bd45d05677203cb4825dc4a93983c5623a7f06719039b28adcb5fd7500/hosts",
	        "LogPath": "/var/lib/docker/containers/99b356bd45d05677203cb4825dc4a93983c5623a7f06719039b28adcb5fd7500/99b356bd45d05677203cb4825dc4a93983c5623a7f06719039b28adcb5fd7500-json.log",
	        "Name": "/containerd-20201109134931-342799",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "containerd-20201109134931-342799:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "containerd-20201109134931-342799",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/26ca7d93cafe076415b98401787d6d825d1a2ffd3dcc5cc3a1fb34c684850325-init/diff:/var/lib/docker/overlay2/95e86d765bff52afd170fbce58d14aba80acb592622f18ddf93f06d4a9f68b7b/diff:/var/lib/docker/overlay2/f3b9d7ca87d0bc2c33a2b73d4acfe36b9573ed84758410f9fb2646a520f241fe/diff:/var/lib/docker/overlay2/207fc0f6ee67dd96ccb25c63e93e7189f5f397b509f8c71ead5583a80c5f94b5/diff:/var/lib/docker/overlay2/8caa5ec9f05b14f99c7708700c1734fe9d519d629a5228c0c3cb1231f1cef09e/diff:/var/lib/docker/overlay2/f2f5d9470c647e56b32cee1438db6b6249291738854b6e7a0a4a405034649aa6/diff:/var/lib/docker/overlay2/99b3672a581ed70eb5d25da5aed44455272a7adbd48148c91dc38abfbaa6e93d/diff:/var/lib/docker/overlay2/4bc3f7ac122327f368cdfb9bf82d578fce25cf7f63160aa08d869610da8889f4/diff:/var/lib/docker/overlay2/0f945a2ada7520d0f95289ddf08a8e7aedfd5dbfad8087529175cd81d61b52bc/diff:/var/lib/docker/overlay2/a14122d44a5bdb05416a0e7fbe1684738e86454c2eeb7671ecc856e6fdaa0273/diff:/var/lib/docker/overlay2/e669c0
1451df27b1aeed4835d4d9d40fae282947f2b260d9bc1d86355f464b80/diff:/var/lib/docker/overlay2/f0227d1f2e3671850f58a9e89d347d8c2a26c40d79b078f02122819452c582d8/diff:/var/lib/docker/overlay2/35b3431b00b5c7bd66b1d24487a8a45645708ce7a87848986068c0df1515b812/diff:/var/lib/docker/overlay2/c1af3457c560ed19e519e64ff919ac7035d8d9801f4f44adf7b160339827ebd0/diff:/var/lib/docker/overlay2/3861f37a52f49a93d483d69cc77384a5792b260325af8224c6225039f7480889/diff:/var/lib/docker/overlay2/05239e7556fc89f82e761ff1ddd2d1ed0ee1b426a039ad3c9a3ba0519d737f51/diff:/var/lib/docker/overlay2/cb1f04e7554841e6f2102080f7854a3c7e464e45648c1419ef2f8b4d8a03f0bb/diff:/var/lib/docker/overlay2/80908997b1122a686bee670e8dd93659dece6b51614d8576f02c11b96178534d/diff:/var/lib/docker/overlay2/6c4d0be909f5f0a95818a6009b44c355b57e39b0709de1cce2ba45597354d590/diff:/var/lib/docker/overlay2/3a855d5f2b162c2d938a5745ba8131c9a2b244967b4a03f6cddd177236f3a934/diff:/var/lib/docker/overlay2/0ab0a2a3e614830a41a389aa6ff55503f9c10b1227acfb927550f9726eb9a605/diff:/var/lib/d
ocker/overlay2/3331dc047d4c1c4d3b6bf48aba65be1ad34cff6d25329ef56e91ad8e4ee65fde/diff:/var/lib/docker/overlay2/d286c2ac5bb00869e09924bb7fc7389073b7267af1cc4c3ae6a339b29e4c0cc1/diff:/var/lib/docker/overlay2/d0a518fbd2bc93756d11040cfe4d7757f96e813dd9ddf918a54a82f7e61cb791/diff:/var/lib/docker/overlay2/2d63359a4d0da5faa54d017f487391caaee972fe4fb98db4b82e7aee2d9b01ee/diff:/var/lib/docker/overlay2/40ca142bba947004e2782bce25bd04810589fffbf8086ba91e3a074517cfa13c/diff:/var/lib/docker/overlay2/5b738c1270b81825ef44e9bd466dd8533bc4e3e3796355d7dc5dc104fb3a18c8/diff:/var/lib/docker/overlay2/c57d8a6f6b648bd2cf9d7140762b8590e1a71b9040c8a0f2b8ab31f65011bcd2/diff:/var/lib/docker/overlay2/37f83aba64f92a6cf695468f5b0a1016634c95e09933dcecec297cca22b9038b/diff:/var/lib/docker/overlay2/396d3843ed4e04893bd4bf961eab2522ce90962e626e057f5be38cbe0ed65d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/26ca7d93cafe076415b98401787d6d825d1a2ffd3dcc5cc3a1fb34c684850325/merged",
	                "UpperDir": "/var/lib/docker/overlay2/26ca7d93cafe076415b98401787d6d825d1a2ffd3dcc5cc3a1fb34c684850325/diff",
	                "WorkDir": "/var/lib/docker/overlay2/26ca7d93cafe076415b98401787d6d825d1a2ffd3dcc5cc3a1fb34c684850325/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "containerd-20201109134931-342799",
	                "Source": "/var/lib/docker/volumes/containerd-20201109134931-342799/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "containerd-20201109134931-342799",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "containerd-20201109134931-342799",
	                "name.minikube.sigs.k8s.io": "containerd-20201109134931-342799",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d4f89d45be1ce4f8723c177710b2e578889efaf7c38beb91d64c55795d7f8145",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d4f89d45be1c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "containerd-20201109134931-342799": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.82.16"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "99b356bd45d0"
	                    ],
	                    "NetworkID": "fd09085a8fbbe4ca9d0ddc43b8c256383d5c5545ab16526640254961fac9996d",
	                    "EndpointID": "aee87ea6d2de2a507ad7645353bfaafc1b5289a12546659010a6cd0287ed9cb1",
	                    "Gateway": "192.168.82.1",
	                    "IPAddress": "192.168.82.16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:52:10",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
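In the inspect output above, HostConfig.PortBindings requests an arbitrary host port on 127.0.0.1 for each exposed port, and NetworkSettings.Ports shows what Docker actually allocated (22/tcp on 33123, and so on). A rough equivalent of that lookup through the Docker Go SDK, purely illustrative since the harness shells out to docker inspect:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	info, err := cli.ContainerInspect(context.Background(), "containerd-20201109134931-342799")
	if err != nil {
		panic(err)
	}
	// 22/tcp is the SSH port minikube uses to reach the node container.
	for _, binding := range info.NetworkSettings.Ports["22/tcp"] {
		fmt.Printf("%s:%s\n", binding.HostIP, binding.HostPort)
	}
}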
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p containerd-20201109134931-342799 -n containerd-20201109134931-342799
helpers_test.go:238: <<< TestStartStop/group/containerd/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/containerd/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p containerd-20201109134931-342799 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/VerifyKubernetesImages
helpers_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 -p containerd-20201109134931-342799 logs -n 25: exit status 110 (4.28452781s)

                                                
                                                
-- stdout --
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	* 7a1bc5021f7c8       503bc4b7440b9       13 seconds ago       Running             kubernetes-dashboard        0                   8d2e2e8430cf3
	* a4d50c8d630b0       86262685d9abb       13 seconds ago       Running             dashboard-metrics-scraper   0                   c3b71169346d1
	* 92583c67fa8d6       2186a1a396deb       27 seconds ago       Running             kindnet-cni                 1                   d5c07f06b371d
	* 3924359627f80       56cc512116c8f       28 seconds ago       Running             busybox                     1                   9eec8f16758d6
	* 39af8ef96ca06       bfe3a36ebd252       28 seconds ago       Running             coredns                     1                   7efc0377102c6
	* cd4b6c56fe2e5       d373dd5a8593a       28 seconds ago       Running             kube-proxy                  1                   07fa14c3bc61a
	* 0b79a09e2c8b5       bad58561c4be7       28 seconds ago       Running             storage-provisioner         2                   243ca5b8b332f
	* 04f2d9b343c9e       8603821e1a7a5       37 seconds ago       Running             kube-controller-manager     2                   9cdd5c9661d17
	* 2d2d50c55165e       2f32d66b884f8       37 seconds ago       Running             kube-scheduler              1                   0557d0b4588f2
	* 5a2505cd4c0a6       0369cf4303ffd       37 seconds ago       Running             etcd                        1                   67c6650d5b81d
	* 5e6b4b76b9302       607331163122e       37 seconds ago       Running             kube-apiserver              1                   d68b763d44a25
	* dd8d6ea4cc99e       56cc512116c8f       About a minute ago   Exited              busybox                     0                   37850beb46002
	* 7e611def4558b       bad58561c4be7       About a minute ago   Exited              storage-provisioner         1                   b8c8c7728ffc7
	* c580c7a0a7eda       bfe3a36ebd252       About a minute ago   Exited              coredns                     0                   4c4ab51d87985
	* 36e8ddd26ca0d       2186a1a396deb       2 minutes ago        Exited              kindnet-cni                 0                   34f79141a007f
	* 44e21e64db01d       d373dd5a8593a       2 minutes ago        Exited              kube-proxy                  0                   a32a8c3cf3abe
	* ac1933ccabefb       8603821e1a7a5       2 minutes ago        Exited              kube-controller-manager     1                   ea53ef91fd5f6
	* 890ee7cd37088       0369cf4303ffd       3 minutes ago        Exited              etcd                        0                   b92e947b069e9
	* d88b553934b0c       607331163122e       3 minutes ago        Exited              kube-apiserver              0                   161d601f0234c
	* 4e1008484dc44       2f32d66b884f8       3 minutes ago        Exited              kube-scheduler              0                   1d4dd1f561cfa
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2020-11-09 21:52:09 UTC, end at Mon 2020-11-09 21:52:59 UTC. --
	* Nov 09 21:52:31 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:31.867438244Z" level=info msg="CreateContainer within sandbox \"d5c07f06b371dc50f6012767ef66c21d3756ed7546d33fb6dd56bf0c0f40bfb3\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	* Nov 09 21:52:31 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:31.969515027Z" level=info msg="CreateContainer within sandbox \"d5c07f06b371dc50f6012767ef66c21d3756ed7546d33fb6dd56bf0c0f40bfb3\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"92583c67fa8d6fecff98f043857b04bf38846c33560f0d9508ab7bde2755e89a\""
	* Nov 09 21:52:31 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:31.970517531Z" level=info msg="StartContainer for \"92583c67fa8d6fecff98f043857b04bf38846c33560f0d9508ab7bde2755e89a\""
	* Nov 09 21:52:31 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:31.995022294Z" level=info msg="shim containerd-shim started" address=/containerd-shim/8c988bac89e4289a44885e356e6983b59c6851e5d97ca2118a3b3c70df44fdcd.sock debug=false pid=2261
	* Nov 09 21:52:32 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:32.078318703Z" level=info msg="StartContainer for \"39af8ef96ca06df7744beff16ef4e9bd7adef7e9bbed662c7825f74f25c777a6\" returns successfully"
	* Nov 09 21:52:32 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:32.091517859Z" level=info msg="StartContainer for \"3924359627f804d5ab368f774969816aec4846b187ee4b998e1f64ed4a154d22\" returns successfully"
	* Nov 09 21:52:32 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:32.529998664Z" level=info msg="StartContainer for \"92583c67fa8d6fecff98f043857b04bf38846c33560f0d9508ab7bde2755e89a\" returns successfully"
	* Nov 09 21:52:35 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:35.426192777Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/minikube-local-cache-test:functional-20201109132758-342799,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	* Nov 09 21:52:37 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:37.659289142Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/minikube-local-cache-test:functional-20201109132758-342799,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.309553110Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:dashboard-metrics-scraper-c95fcf479-gzdqv,Uid:fdf79f9c-981e-4e69-b0f4-ceb8026a763c,Namespace:kubernetes-dashboard,Attempt:0,}"
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.394661175Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-584f46694c-p6z7f,Uid:3f8065b2-207e-42f7-bb41-d677815e9f47,Namespace:kubernetes-dashboard,Attempt:0,}"
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.479064166Z" level=info msg="shim containerd-shim started" address=/containerd-shim/de847b3eec7e5a64c08a3bc2059e62cd9b011099fe47bc234126dcd5a5a8e846.sock debug=false pid=2832
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.499763776Z" level=info msg="shim containerd-shim started" address=/containerd-shim/fab50136094f10da5f17d9775baddf14fb95c6944677b87d5f8278d0c12d57c4.sock debug=false pid=2848
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.685951891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:dashboard-metrics-scraper-c95fcf479-gzdqv,Uid:fdf79f9c-981e-4e69-b0f4-ceb8026a763c,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"c3b71169346d192063830adda791c38eddb29178f50b7e591c8f07ffb0c788af\""
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.690103043Z" level=info msg="CreateContainer within sandbox \"c3b71169346d192063830adda791c38eddb29178f50b7e591c8f07ffb0c788af\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,}"
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.711721810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-584f46694c-p6z7f,Uid:3f8065b2-207e-42f7-bb41-d677815e9f47,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"8d2e2e8430cf3faf315b77b96636f30aa7d65652820862c3c94be6a8fcb59b5d\""
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.715175638Z" level=info msg="CreateContainer within sandbox \"8d2e2e8430cf3faf315b77b96636f30aa7d65652820862c3c94be6a8fcb59b5d\" for container &ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,}"
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.735884788Z" level=info msg="CreateContainer within sandbox \"c3b71169346d192063830adda791c38eddb29178f50b7e591c8f07ffb0c788af\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,} returns container id \"a4d50c8d630b06284ac69a135e80e1f102f253ea79155c9b2a8f8779768de182\""
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.736448911Z" level=info msg="StartContainer for \"a4d50c8d630b06284ac69a135e80e1f102f253ea79155c9b2a8f8779768de182\""
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.737449713Z" level=info msg="shim containerd-shim started" address=/containerd-shim/3a9504fb22bff05b2ba7d6111bd9302695ad5c4f8211c3ddb8782741b2f39f75.sock debug=false pid=2894
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.777554092Z" level=info msg="CreateContainer within sandbox \"8d2e2e8430cf3faf315b77b96636f30aa7d65652820862c3c94be6a8fcb59b5d\" for &ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,} returns container id \"7a1bc5021f7c88e0e6eda72bf53ff7e40108f3887b7e44577a823a37023c5469\""
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.780808460Z" level=info msg="StartContainer for \"7a1bc5021f7c88e0e6eda72bf53ff7e40108f3887b7e44577a823a37023c5469\""
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.784001391Z" level=info msg="shim containerd-shim started" address=/containerd-shim/1f47bb0a4719e0e463bd0b9d7f05e1fc7e0a971e2b154138596415c2aceff8c9.sock debug=false pid=2911
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.969007671Z" level=info msg="StartContainer for \"a4d50c8d630b06284ac69a135e80e1f102f253ea79155c9b2a8f8779768de182\" returns successfully"
	* Nov 09 21:52:47 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:47.066173099Z" level=info msg="StartContainer for \"7a1bc5021f7c88e0e6eda72bf53ff7e40108f3887b7e44577a823a37023c5469\" returns successfully"
	* 
	* ==> coredns [39af8ef96ca06df7744beff16ef4e9bd7adef7e9bbed662c7825f74f25c777a6] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* 
	* ==> coredns [c580c7a0a7eda9845120a0b4adae01794159dea866c8c3d1c4a453ad8f686c81] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* 
	* ==> describe nodes <==
	* Name:               containerd-20201109134931-342799
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=containerd-20201109134931-342799
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=21ac2a6a37964be4739a8be2fb5a50a8d224597d
	*                     minikube.k8s.io/name=containerd-20201109134931-342799
	*                     minikube.k8s.io/updated_at=2020_11_09T13_50_28_0700
	*                     minikube.k8s.io/version=v1.14.2
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Mon, 09 Nov 2020 21:50:03 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  containerd-20201109134931-342799
	*   AcquireTime:     <unset>
	*   RenewTime:       Mon, 09 Nov 2020 21:52:50 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Mon, 09 Nov 2020 21:52:30 +0000   Mon, 09 Nov 2020 21:49:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Mon, 09 Nov 2020 21:52:30 +0000   Mon, 09 Nov 2020 21:49:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Mon, 09 Nov 2020 21:52:30 +0000   Mon, 09 Nov 2020 21:49:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Mon, 09 Nov 2020 21:52:30 +0000   Mon, 09 Nov 2020 21:50:43 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.82.16
	*   Hostname:    containerd-20201109134931-342799
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 40320265c5284752a13c6ca8ef6f6de3
	*   System UUID:                0618ea8e-43e2-48d6-84fd-daea6cb5b020
	*   Boot ID:                    9ad1ab50-5be9-48e2-8ae1-dc31113bc120
	*   Kernel Version:             4.9.0-14-amd64
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  containerd://1.3.7
	*   Kubelet Version:            v1.19.2
	*   Kube-Proxy Version:         v1.19.2
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (11 in total)
	*   Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	*   default                     busybox                                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	*   kube-system                 coredns-f9fd979d6-dqwvt                                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m18s
	*   kube-system                 etcd-containerd-20201109134931-342799                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	*   kube-system                 kindnet-gvs6z                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m17s
	*   kube-system                 kube-apiserver-containerd-20201109134931-342799             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	*   kube-system                 kube-controller-manager-containerd-20201109134931-342799    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m36s
	*   kube-system                 kube-proxy-chp96                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	*   kube-system                 kube-scheduler-containerd-20201109134931-342799             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	*   kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	*   kubernetes-dashboard        dashboard-metrics-scraper-c95fcf479-gzdqv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	*   kubernetes-dashboard        kubernetes-dashboard-584f46694c-p6z7f                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                750m (9%)   100m (1%)
	*   memory             120Mi (0%)  220Mi (0%)
	*   ephemeral-storage  0 (0%)      0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age                  From        Message
	*   ----    ------                   ----                 ----        -------
	*   Normal  NodeHasSufficientMemory  3m6s (x5 over 3m6s)  kubelet     Node containerd-20201109134931-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    3m6s (x5 over 3m6s)  kubelet     Node containerd-20201109134931-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     3m6s (x5 over 3m6s)  kubelet     Node containerd-20201109134931-342799 status is now: NodeHasSufficientPID
	*   Normal  Starting                 2m27s                kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  2m27s                kubelet     Node containerd-20201109134931-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    2m27s                kubelet     Node containerd-20201109134931-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     2m27s                kubelet     Node containerd-20201109134931-342799 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             2m27s                kubelet     Node containerd-20201109134931-342799 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  2m27s                kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                2m17s                kubelet     Node containerd-20201109134931-342799 status is now: NodeReady
	*   Normal  Starting                 2m12s                kube-proxy  Starting kube-proxy.
	*   Normal  Starting                 39s                  kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  39s (x8 over 39s)    kubelet     Node containerd-20201109134931-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    39s (x8 over 39s)    kubelet     Node containerd-20201109134931-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     39s (x7 over 39s)    kubelet     Node containerd-20201109134931-342799 status is now: NodeHasSufficientPID
	*   Normal  NodeAllocatableEnforced  39s                  kubelet     Updated Node Allocatable limit across pods
	*   Normal  Starting                 28s                  kube-proxy  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [  +0.678855] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000002] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +0.000002] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000001] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +1.273129] IPv4: martian source 10.85.0.13 from 10.85.0.13, on dev eth0
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff f6 e9 8a 0e 0a 19 08 06        ..............
	* [  +1.464078] IPv4: martian source 10.85.0.14 from 10.85.0.14, on dev eth0
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 26 a7 bc 25 52 eb 08 06        ......&..%R...
	* [  +0.438582] IPv4: martian source 10.85.0.15 from 10.85.0.15, on dev eth0
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5e 56 30 8c 94 40 08 06        ......^V0..@..
	* [  +0.000728] IPv4: martian source 10.85.0.16 from 10.85.0.16, on dev eth0
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff 32 a4 3e fc 48 72 08 06        ......2.>.Hr..
	* [  +3.860872] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vethd3a877be
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 9e ad 56 03 1d d4 08 06        ........V.....
	* [  +0.035505] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethc290fae4
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 02 1c 06 fe cb 70 08 06        ...........p..
	* [  +1.118377] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000003] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000001] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +0.003991] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000005] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +6.920951] cgroup: cgroup2: unknown option "nsdelegate"
	* 
	* ==> etcd [5a2505cd4c0a6014393c7edf39ca5d91c3d3c4d573173cc73a81f861f1a2d9de] <==
	* raft2020/11/09 21:52:22 INFO: newRaft 9364466d61f1ec2b [peers: [], term: 2, commit: 492, applied: 0, lastindex: 492, lastterm: 2]
	* 2020-11-09 21:52:22.775348 W | auth: simple token is not cryptographically signed
	* 2020-11-09 21:52:22.784740 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	* raft2020/11/09 21:52:22 INFO: 9364466d61f1ec2b switched to configuration voters=(10620691256855096363)
	* 2020-11-09 21:52:22.785441 I | etcdserver/membership: added member 9364466d61f1ec2b [https://192.168.82.16:2380] to cluster ce8177bd8a545254
	* 2020-11-09 21:52:22.785599 N | etcdserver/membership: set the initial cluster version to 3.4
	* 2020-11-09 21:52:22.785647 I | etcdserver/api: enabled capabilities for version 3.4
	* 2020-11-09 21:52:22.790154 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	* 2020-11-09 21:52:22.790607 I | embed: listening for peers on 192.168.82.16:2380
	* 2020-11-09 21:52:22.791204 I | embed: listening for metrics on http://127.0.0.1:2381
	* raft2020/11/09 21:52:23 INFO: 9364466d61f1ec2b is starting a new election at term 2
	* raft2020/11/09 21:52:23 INFO: 9364466d61f1ec2b became candidate at term 3
	* raft2020/11/09 21:52:23 INFO: 9364466d61f1ec2b received MsgVoteResp from 9364466d61f1ec2b at term 3
	* raft2020/11/09 21:52:23 INFO: 9364466d61f1ec2b became leader at term 3
	* raft2020/11/09 21:52:23 INFO: raft.node: 9364466d61f1ec2b elected leader 9364466d61f1ec2b at term 3
	* 2020-11-09 21:52:23.975832 I | etcdserver: published {Name:containerd-20201109134931-342799 ClientURLs:[https://192.168.82.16:2379]} to cluster ce8177bd8a545254
	* 2020-11-09 21:52:23.975862 I | embed: ready to serve client requests
	* 2020-11-09 21:52:23.978080 I | embed: serving client requests on 127.0.0.1:2379
	* 2020-11-09 21:52:23.978205 I | embed: ready to serve client requests
	* 2020-11-09 21:52:23.979804 I | embed: serving client requests on 192.168.82.16:2379
	* 2020-11-09 21:52:39.737697 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:52:44.977511 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:52:46.103289 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kubernetes-dashboard/default\" " with result "range_response_count:1 size:199" took too long (117.15839ms) to execute
	* 2020-11-09 21:52:46.122274 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479-gzdqv\" " with result "range_response_count:1 size:2903" took too long (116.560946ms) to execute
	* 2020-11-09 21:52:54.977470 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> etcd [890ee7cd370886d799eb9eda7907865382550e8cd7dbe836988ae1cb52c94260] <==
	* 2020-11-09 21:50:24.824979 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:0 size:4" took too long (2.937568782s) to execute
	* 2020-11-09 21:50:24.825574 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/containerd-20201109134931-342799\" " with result "range_response_count:0 size:4" took too long (2.79013846s) to execute
	* 2020-11-09 21:50:24.834807 W | etcdserver: read-only range request "key:\"/registry/resourcequotas/kube-system/\" range_end:\"/registry/resourcequotas/kube-system0\" " with result "range_response_count:0 size:4" took too long (299.83796ms) to execute
	* 2020-11-09 21:50:25.358074 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:50:38.791161 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:50:40.177051 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:50:50.177392 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:51:00.177687 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:51:07.441009 W | etcdserver: request "header:<ID:17017825011373359164 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.82.16\" mod_revision:422 > success:<request_put:<key:\"/registry/masterleases/192.168.82.16\" value_size:68 lease:7794452974518583354 >> failure:<request_range:<key:\"/registry/masterleases/192.168.82.16\" > >>" with result "size:16" took too long (417.905089ms) to execute
	* 2020-11-09 21:51:07.441184 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (859.114232ms) to execute
	* 2020-11-09 21:51:11.177003 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503)
	* 2020-11-09 21:51:11.973210 W | wal: sync duration of 2.382323111s, expected less than 1s
	* 2020-11-09 21:51:12.257129 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (4.673826472s) to execute
	* 2020-11-09 21:51:12.257170 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:5690" took too long (2.780789521s) to execute
	* 2020-11-09 21:51:12.257187 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (2.668500524s) to execute
	* 2020-11-09 21:51:12.257259 W | etcdserver: read-only range request "key:\"/registry/minions\" range_end:\"/registry/miniont\" count_only:true " with result "range_response_count:0 size:7" took too long (3.107011099s) to execute
	* 2020-11-09 21:51:12.257369 W | etcdserver: request "header:<ID:17017825011373359175 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-containerd-20201109134931-342799.1645f56813b78812\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-containerd-20201109134931-342799.1645f56813b78812\" value_size:765 lease:7794452974518582788 >> failure:<>>" with result "size:16" took too long (2.666328699s) to execute
	* 2020-11-09 21:51:12.257785 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:8 size:40946" took too long (2.362578565s) to execute
	* 2020-11-09 21:51:12.260936 W | etcdserver: read-only range request "key:\"/registry/services/endpoints\" range_end:\"/registry/services/endpointt\" count_only:true " with result "range_response_count:0 size:7" took too long (2.306538469s) to execute
	* 2020-11-09 21:51:12.261450 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (303.30854ms) to execute
	* 2020-11-09 21:51:12.261825 W | etcdserver: read-only range request "key:\"/registry/limitranges\" range_end:\"/registry/limitranget\" count_only:true " with result "range_response_count:0 size:5" took too long (418.97084ms) to execute
	* 2020-11-09 21:51:12.262091 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (666.65172ms) to execute
	* 2020-11-09 21:51:12.262355 W | etcdserver: read-only range request "key:\"/registry/csinodes\" range_end:\"/registry/csinodet\" count_only:true " with result "range_response_count:0 size:7" took too long (1.558202852s) to execute
	* 2020-11-09 21:51:12.262748 W | etcdserver: read-only range request "key:\"/registry/controllerrevisions\" range_end:\"/registry/controllerrevisiont\" count_only:true " with result "range_response_count:0 size:7" took too long (1.922770924s) to execute
	* 2020-11-09 21:51:20.177099 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> kernel <==
	*  21:53:00 up  1:35,  0 users,  load average: 12.37, 11.07, 9.04
	* Linux containerd-20201109134931-342799 4.9.0-14-amd64 #1 SMP Debian 4.9.240-2 (2020-10-30) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [5e6b4b76b93026c5d4e92fb298546ae77346769034d7b2e7d90d097b399894ef] <==
	* I1109 21:52:30.235164       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	* I1109 21:52:30.235686       1 naming_controller.go:291] Starting NamingConditionController
	* I1109 21:52:30.235711       1 controller.go:86] Starting OpenAPI controller
	* I1109 21:52:30.235733       1 establishing_controller.go:76] Starting EstablishingController
	* I1109 21:52:30.234603       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
	* I1109 21:52:30.255449       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* I1109 21:52:30.292958       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	* I1109 21:52:30.358000       1 cache.go:39] Caches are synced for AvailableConditionController controller
	* I1109 21:52:30.358015       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	* I1109 21:52:30.358073       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	* I1109 21:52:30.358644       1 cache.go:39] Caches are synced for autoregister controller
	* I1109 21:52:30.358690       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	* E1109 21:52:30.366219       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	* I1109 21:52:31.233439       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	* I1109 21:52:31.233480       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	* I1109 21:52:31.240733       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	* I1109 21:52:32.779905       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	* I1109 21:52:33.039559       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	* I1109 21:52:33.073516       1 controller.go:606] quota admission added evaluator for: deployments.apps
	* I1109 21:52:33.179897       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	* I1109 21:52:33.193871       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	* I1109 21:52:45.896951       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	* I1109 21:52:45.964862       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	* I1109 21:52:45.974985       1 controller.go:606] quota admission added evaluator for: endpoints
	* I1109 21:52:45.976842       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	* 
	* ==> kube-apiserver [d88b553934b0cc487ef83effd6c68709f81a7aa13f7c2068771f4ba1e4e3ee7b] <==
	* I1109 21:50:27.964860       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	* I1109 21:50:33.957905       1 client.go:360] parsed scheme: "passthrough"
	* I1109 21:50:33.957977       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I1109 21:50:33.957991       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I1109 21:50:42.913510       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	* I1109 21:50:43.140284       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	* I1109 21:51:07.441885       1 trace.go:205] Trace[2092517085]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (09-Nov-2020 21:51:06.280) (total time: 1161ms):
	* Trace[2092517085]: ---"Transaction committed" 1159ms (21:51:00.441)
	* Trace[2092517085]: [1.161073772s] [1.161073772s] END
	* I1109 21:51:12.258016       1 trace.go:205] Trace[2012452681]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (09-Nov-2020 21:51:09.475) (total time: 2782ms):
	* Trace[2012452681]: [2.782258557s] [2.782258557s] END
	* I1109 21:51:12.258394       1 trace.go:205] Trace[783842984]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.19.2 (linux/amd64) kubernetes/f574309,client:192.168.82.16 (09-Nov-2020 21:51:09.589) (total time: 2668ms):
	* Trace[783842984]: ---"Object stored in database" 2668ms (21:51:00.258)
	* Trace[783842984]: [2.668644744s] [2.668644744s] END
	* I1109 21:51:12.258571       1 trace.go:205] Trace[227814412]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.82.16 (09-Nov-2020 21:51:09.475) (total time: 2782ms):
	* Trace[227814412]: ---"Listing from storage done" 2782ms (21:51:00.258)
	* Trace[227814412]: [2.782849278s] [2.782849278s] END
	* I1109 21:51:12.263134       1 trace.go:205] Trace[812533960]: "List etcd3" key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: (09-Nov-2020 21:51:09.894) (total time: 2368ms):
	* Trace[812533960]: [2.368476609s] [2.368476609s] END
	* I1109 21:51:12.264808       1 trace.go:205] Trace[868845683]: "List" url:/api/v1/namespaces/kube-system/pods,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.82.1 (09-Nov-2020 21:51:09.894) (total time: 2370ms):
	* Trace[868845683]: ---"Listing from storage done" 2368ms (21:51:00.263)
	* Trace[868845683]: [2.370162076s] [2.370162076s] END
	* I1109 21:51:17.980362       1 client.go:360] parsed scheme: "passthrough"
	* I1109 21:51:17.980434       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I1109 21:51:17.980490       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* 
	* ==> kube-controller-manager [04f2d9b343c9e28b1a6ac5c56a9859008e396b7bce3b769554a2f6931bf3d572] <==
	* I1109 21:52:45.957553       1 shared_informer.go:247] Caches are synced for service account 
	* I1109 21:52:45.957585       1 shared_informer.go:247] Caches are synced for TTL 
	* I1109 21:52:45.957636       1 range_allocator.go:172] Starting range CIDR allocator
	* I1109 21:52:45.957697       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	* I1109 21:52:45.957716       1 shared_informer.go:247] Caches are synced for cidrallocator 
	* I1109 21:52:45.961906       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	* I1109 21:52:45.965718       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-c95fcf479-gzdqv"
	* I1109 21:52:45.967673       1 shared_informer.go:247] Caches are synced for namespace 
	* I1109 21:52:45.972293       1 shared_informer.go:247] Caches are synced for endpoint 
	* I1109 21:52:45.983863       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-584f46694c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-584f46694c-p6z7f"
	* I1109 21:52:46.061032       1 shared_informer.go:247] Caches are synced for persistent volume 
	* I1109 21:52:46.065442       1 shared_informer.go:247] Caches are synced for daemon sets 
	* I1109 21:52:46.081008       1 shared_informer.go:247] Caches are synced for stateful set 
	* I1109 21:52:46.099396       1 shared_informer.go:247] Caches are synced for disruption 
	* I1109 21:52:46.099431       1 disruption.go:339] Sending events to api server.
	* I1109 21:52:46.101267       1 shared_informer.go:247] Caches are synced for attach detach 
	* I1109 21:52:46.105022       1 shared_informer.go:247] Caches are synced for expand 
	* I1109 21:52:46.133448       1 shared_informer.go:247] Caches are synced for PVC protection 
	* I1109 21:52:46.141181       1 shared_informer.go:247] Caches are synced for resource quota 
	* I1109 21:52:46.143327       1 shared_informer.go:247] Caches are synced for resource quota 
	* I1109 21:52:46.182306       1 shared_informer.go:247] Caches are synced for ReplicationController 
	* I1109 21:52:46.214978       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I1109 21:52:46.469164       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:52:46.469201       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I1109 21:52:46.516645       1 shared_informer.go:247] Caches are synced for garbage collector 
	* 
	* ==> kube-controller-manager [ac1933ccabefbbff7c4695d3fe105ac2d2f27673a9ebab9320975cbad7e53eed] <==
	* I1109 21:50:43.100323       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-containerd-20201109134931-342799" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I1109 21:50:43.128990       1 shared_informer.go:247] Caches are synced for daemon sets 
	* I1109 21:50:43.136047       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	* I1109 21:50:43.136078       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	* I1109 21:50:43.172701       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gvs6z"
	* I1109 21:50:43.175364       1 shared_informer.go:247] Caches are synced for persistent volume 
	* I1109 21:50:43.176059       1 shared_informer.go:247] Caches are synced for attach detach 
	* I1109 21:50:43.176566       1 shared_informer.go:247] Caches are synced for GC 
	* I1109 21:50:43.176687       1 shared_informer.go:247] Caches are synced for TTL 
	* I1109 21:50:43.181678       1 shared_informer.go:247] Caches are synced for node 
	* I1109 21:50:43.181733       1 range_allocator.go:172] Starting range CIDR allocator
	* I1109 21:50:43.181739       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	* I1109 21:50:43.181745       1 shared_informer.go:247] Caches are synced for cidrallocator 
	* E1109 21:50:43.191133       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	* I1109 21:50:43.203091       1 range_allocator.go:373] Set node containerd-20201109134931-342799 PodCIDR to [10.244.0.0/24]
	* I1109 21:50:43.218781       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I1109 21:50:43.219355       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-chp96"
	* E1109 21:50:43.228453       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"e0cd4310-7660-47d0-85e2-31d5e05c4ff2", ResourceVersion:"242", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63740555428, loc:(*time.Location)(0x6a59c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-syste
m\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:0.5.4\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"e
ffect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0019074a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0019074c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0019074e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]str
ing{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001907500), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil
), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001907520), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001907540), Empt
yDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil
), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:0.5.4", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001907560)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0019075a0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDe
cAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Li
fecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00160f020), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000ef4058), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000417490), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.Hos
tAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0011581c8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000ef40c0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	* E1109 21:50:43.264108       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"fa2dde10-2899-496f-b231-4526182b2289", ResourceVersion:"225", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63740555427, loc:(*time.Location)(0x6a59c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001907380), FieldsType:"FieldsV1", FieldsV1:(*v1.Fiel
dsV1)(0xc0019073a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0019073c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)
(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001234180), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v
1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0019073e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersi
stentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001907400), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.
DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.19.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"",
ValueFrom:(*v1.EnvVarSource)(0xc001907440)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00160efc0), Stdin:false, StdinOnce:false, TTY:false}},
EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000d7bdf8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0004173b0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConf
ig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0011581c0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000d7be48)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	* I1109 21:50:43.519030       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:50:43.524821       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:50:43.524878       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I1109 21:50:43.694917       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-f9fd979d6 to 1"
	* I1109 21:50:43.716567       1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-f9fd979d6-95r2l"
	* I1109 21:50:48.088846       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	* 
	* ==> kube-proxy [44e21e64db01dc07094e6d9988eb6874df124a8a86c97d7224ae6bac80bbfec6] <==
	* I1109 21:50:48.819576       1 node.go:136] Successfully retrieved node IP: 192.168.82.16
	* I1109 21:50:48.819800       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.82.16), assume IPv4 operation
	* W1109 21:50:48.988907       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
	* I1109 21:50:48.989014       1 server_others.go:186] Using iptables Proxier.
	* I1109 21:50:48.989410       1 server.go:650] Version: v1.19.2
	* I1109 21:50:48.990188       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:50:48.990806       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1109 21:50:48.990874       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1109 21:50:48.991086       1 config.go:315] Starting service config controller
	* I1109 21:50:48.991100       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I1109 21:50:48.991220       1 config.go:224] Starting endpoint slice config controller
	* I1109 21:50:48.991273       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I1109 21:50:49.091358       1 shared_informer.go:247] Caches are synced for service config 
	* I1109 21:50:49.091376       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* 
	* ==> kube-proxy [cd4b6c56fe2e5480b780d5e22d95bc29c9d6df74f20a127c58a3ea9657970095] <==
	* I1109 21:52:31.969248       1 node.go:136] Successfully retrieved node IP: 192.168.82.16
	* I1109 21:52:31.969555       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.82.16), assume IPv4 operation
	* W1109 21:52:32.119708       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
	* I1109 21:52:32.119815       1 server_others.go:186] Using iptables Proxier.
	* I1109 21:52:32.120142       1 server.go:650] Version: v1.19.2
	* I1109 21:52:32.120808       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:52:32.120938       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1109 21:52:32.120991       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1109 21:52:32.121742       1 config.go:315] Starting service config controller
	* I1109 21:52:32.121752       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I1109 21:52:32.121787       1 config.go:224] Starting endpoint slice config controller
	* I1109 21:52:32.121792       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I1109 21:52:32.222090       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* I1109 21:52:32.222125       1 shared_informer.go:247] Caches are synced for service config 
	* 
	* ==> kube-scheduler [2d2d50c55165ec543eecaee424348d515792f6855823d31259bf4a6adf2eab1a] <==
	* I1109 21:52:22.974805       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:52:22.974895       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:52:24.610180       1 serving.go:331] Generated self-signed cert in-memory
	* W1109 21:52:30.259422       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	* W1109 21:52:30.259461       1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	* W1109 21:52:30.259501       1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
	* W1109 21:52:30.259510       1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	* I1109 21:52:30.388725       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:52:30.388751       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:52:30.393343       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:52:30.393387       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:52:30.393747       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	* I1109 21:52:30.393811       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* I1109 21:52:30.493717       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kube-scheduler [4e1008484dc44d16f3659d0cce2eeb98c2f0e2af29fdfe7722eb1cb86d7d7a6d] <==
	* E1109 21:50:10.716945       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:50:11.097929       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:50:11.130303       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:50:11.322939       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:50:12.204078       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:50:12.633512       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:50:12.892331       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:50:13.125423       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:50:13.462016       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1109 21:50:13.466241       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1109 21:50:13.577457       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1109 21:50:17.881755       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:50:18.728946       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:50:18.979164       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:50:19.064843       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:50:19.500414       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:50:19.668754       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:50:20.397004       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:50:21.403896       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:50:23.033702       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:50:23.260442       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1109 21:50:23.283191       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1109 21:50:23.694181       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:50:25.358785       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* I1109 21:50:42.959397       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2020-11-09 21:52:09 UTC, end at Mon 2020-11-09 21:53:03 UTC. --
	* Nov 09 21:52:30 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:30.466330     824 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-sp4dk" (UniqueName: "kubernetes.io/secret/f3f389ed-b66d-493e-9437-e28c846398fc-coredns-token-sp4dk") pod "coredns-f9fd979d6-dqwvt" (UID: "f3f389ed-b66d-493e-9437-e28c846398fc")
	* Nov 09 21:52:30 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:30.466354     824 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/9f178faa-40cb-4f8b-8b92-4812dc53ac67-kube-proxy") pod "kube-proxy-chp96" (UID: "9f178faa-40cb-4f8b-8b92-4812dc53ac67")
	* Nov 09 21:52:30 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:30.466386     824 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-bzgzs" (UniqueName: "kubernetes.io/secret/9f178faa-40cb-4f8b-8b92-4812dc53ac67-kube-proxy-token-bzgzs") pod "kube-proxy-chp96" (UID: "9f178faa-40cb-4f8b-8b92-4812dc53ac67")
	* Nov 09 21:52:30 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:30.469078     824 kubelet_node_status.go:108] Node containerd-20201109134931-342799 was previously registered
	* Nov 09 21:52:30 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:30.469432     824 kubelet_node_status.go:73] Successfully registered node containerd-20201109134931-342799
	* Nov 09 21:52:30 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:30.567124     824 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/633ad387-6ebd-43f2-84c7-9a0535dbc678-tmp") pod "storage-provisioner" (UID: "633ad387-6ebd-43f2-84c7-9a0535dbc678")
	* Nov 09 21:52:30 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:30.567175     824 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-vdcr7" (UniqueName: "kubernetes.io/secret/633ad387-6ebd-43f2-84c7-9a0535dbc678-storage-provisioner-token-vdcr7") pod "storage-provisioner" (UID: "633ad387-6ebd-43f2-84c7-9a0535dbc678")
	* Nov 09 21:52:30 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:30.567328     824 reconciler.go:157] Reconciler: start to sync state
	* Nov 09 21:52:32 containerd-20201109134931-342799 kubelet[824]: E1109 21:52:32.793622     824 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Nov 09 21:52:32 containerd-20201109134931-342799 kubelet[824]: E1109 21:52:32.793701     824 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Nov 09 21:52:42 containerd-20201109134931-342799 kubelet[824]: E1109 21:52:42.876573     824 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Nov 09 21:52:42 containerd-20201109134931-342799 kubelet[824]: E1109 21:52:42.876654     824 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Nov 09 21:52:45 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:45.977192     824 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 09 21:52:46 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:46.071844     824 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 09 21:52:46 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:46.123791     824 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/fdf79f9c-981e-4e69-b0f4-ceb8026a763c-tmp-volume") pod "dashboard-metrics-scraper-c95fcf479-gzdqv" (UID: "fdf79f9c-981e-4e69-b0f4-ceb8026a763c")
	* Nov 09 21:52:46 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:46.124301     824 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-k8fnf" (UniqueName: "kubernetes.io/secret/3f8065b2-207e-42f7-bb41-d677815e9f47-kubernetes-dashboard-token-k8fnf") pod "kubernetes-dashboard-584f46694c-p6z7f" (UID: "3f8065b2-207e-42f7-bb41-d677815e9f47")
	* Nov 09 21:52:46 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:46.124551     824 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/3f8065b2-207e-42f7-bb41-d677815e9f47-tmp-volume") pod "kubernetes-dashboard-584f46694c-p6z7f" (UID: "3f8065b2-207e-42f7-bb41-d677815e9f47")
	* Nov 09 21:52:46 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:46.124950     824 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-k8fnf" (UniqueName: "kubernetes.io/secret/fdf79f9c-981e-4e69-b0f4-ceb8026a763c-kubernetes-dashboard-token-k8fnf") pod "dashboard-metrics-scraper-c95fcf479-gzdqv" (UID: "fdf79f9c-981e-4e69-b0f4-ceb8026a763c")
	* Nov 09 21:52:53 containerd-20201109134931-342799 kubelet[824]: E1109 21:52:53.039201     824 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Nov 09 21:52:53 containerd-20201109134931-342799 kubelet[824]: E1109 21:52:53.039479     824 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Nov 09 21:53:02 containerd-20201109134931-342799 kubelet[824]: I1109 21:53:02.975306     824 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 7e611def4558b78c52f5acda651efcc4dd98cca0d89d034708b02c646b8946f6
	* Nov 09 21:53:02 containerd-20201109134931-342799 kubelet[824]: I1109 21:53:02.975817     824 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 0b79a09e2c8b5606dc078925c8e8d5bca8f3b7baa7e75c3f8ff57bc286dfdc53
	* Nov 09 21:53:02 containerd-20201109134931-342799 kubelet[824]: E1109 21:53:02.976260     824 pod_workers.go:191] Error syncing pod 633ad387-6ebd-43f2-84c7-9a0535dbc678 ("storage-provisioner_kube-system(633ad387-6ebd-43f2-84c7-9a0535dbc678)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(633ad387-6ebd-43f2-84c7-9a0535dbc678)"
	* Nov 09 21:53:03 containerd-20201109134931-342799 kubelet[824]: E1109 21:53:03.128046     824 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Nov 09 21:53:03 containerd-20201109134931-342799 kubelet[824]: E1109 21:53:03.128134     824 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* 
	* ==> kubernetes-dashboard [7a1bc5021f7c88e0e6eda72bf53ff7e40108f3887b7e44577a823a37023c5469] <==
	* 2020/11/09 21:52:47 Using namespace: kubernetes-dashboard
	* 2020/11/09 21:52:47 Using in-cluster config to connect to apiserver
	* 2020/11/09 21:52:47 Using secret token for csrf signing
	* 2020/11/09 21:52:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	* 2020/11/09 21:52:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	* 2020/11/09 21:52:47 Successful initial request to the apiserver, version: v1.19.2
	* 2020/11/09 21:52:47 Generating JWE encryption key
	* 2020/11/09 21:52:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	* 2020/11/09 21:52:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	* 2020/11/09 21:52:47 Initializing JWE encryption key from synchronized object
	* 2020/11/09 21:52:47 Creating in-cluster Sidecar client
	* 2020/11/09 21:52:47 Serving insecurely on HTTP port: 9090
	* 2020/11/09 21:52:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	* 2020/11/09 21:52:47 Starting overwatch
	* 
	* ==> storage-provisioner [0b79a09e2c8b5606dc078925c8e8d5bca8f3b7baa7e75c3f8ff57bc286dfdc53] <==
	* F1109 21:53:01.862501       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	* 
	* ==> storage-provisioner [7e611def4558b78c52f5acda651efcc4dd98cca0d89d034708b02c646b8946f6] <==
	

-- /stdout --
** stderr ** 
	E1109 13:53:00.749536  604424 out.go:286] unable to execute * 2020-11-09 21:51:07.441009 W | etcdserver: request "header:<ID:17017825011373359164 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.82.16\" mod_revision:422 > success:<request_put:<key:\"/registry/masterleases/192.168.82.16\" value_size:68 lease:7794452974518583354 >> failure:<request_range:<key:\"/registry/masterleases/192.168.82.16\" > >>" with result "size:16" took too long (417.905089ms) to execute
	: html/template:* 2020-11-09 21:51:07.441009 W | etcdserver: request "header:<ID:17017825011373359164 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.82.16\" mod_revision:422 > success:<request_put:<key:\"/registry/masterleases/192.168.82.16\" value_size:68 lease:7794452974518583354 >> failure:<request_range:<key:\"/registry/masterleases/192.168.82.16\" > >>" with result "size:16" took too long (417.905089ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E1109 13:53:00.776133  604424 out.go:286] unable to execute * 2020-11-09 21:51:12.257369 W | etcdserver: request "header:<ID:17017825011373359175 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-containerd-20201109134931-342799.1645f56813b78812\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-containerd-20201109134931-342799.1645f56813b78812\" value_size:765 lease:7794452974518582788 >> failure:<>>" with result "size:16" took too long (2.666328699s) to execute
	: html/template:* 2020-11-09 21:51:12.257369 W | etcdserver: request "header:<ID:17017825011373359175 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-containerd-20201109134931-342799.1645f56813b78812\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-containerd-20201109134931-342799.1645f56813b78812\" value_size:765 lease:7794452974518582788 >> failure:<>>" with result "size:16" took too long (2.666328699s) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E1109 13:53:03.516517  604424 logs.go:181] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 7e611def4558b78c52f5acda651efcc4dd98cca0d89d034708b02c646b8946f6" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 7e611def4558b78c52f5acda651efcc4dd98cca0d89d034708b02c646b8946f6": Process exited with status 1
	stdout:
	
	stderr:
	E1109 21:53:03.508965    3402 remote_runtime.go:295] ContainerStatus "7e611def4558b78c52f5acda651efcc4dd98cca0d89d034708b02c646b8946f6" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container "7e611def4558b78c52f5acda651efcc4dd98cca0d89d034708b02c646b8946f6": does not exist
	time="2020-11-09T21:53:03Z" level=fatal msg="rpc error: code = Unknown desc = an error occurred when try to find container \"7e611def4558b78c52f5acda651efcc4dd98cca0d89d034708b02c646b8946f6\": does not exist"
	 output: "\n** stderr ** \nE1109 21:53:03.508965    3402 remote_runtime.go:295] ContainerStatus \"7e611def4558b78c52f5acda651efcc4dd98cca0d89d034708b02c646b8946f6\" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container \"7e611def4558b78c52f5acda651efcc4dd98cca0d89d034708b02c646b8946f6\": does not exist\ntime=\"2020-11-09T21:53:03Z\" level=fatal msg=\"rpc error: code = Unknown desc = an error occurred when try to find container \\\"7e611def4558b78c52f5acda651efcc4dd98cca0d89d034708b02c646b8946f6\\\": does not exist\"\n\n** /stderr **"
	! unable to fetch logs for: storage-provisioner [7e611def4558b78c52f5acda651efcc4dd98cca0d89d034708b02c646b8946f6]

                                                
                                                
** /stderr **
helpers_test.go:243: failed logs error: exit status 110
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/containerd/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:225: (dbg) Run:  docker inspect containerd-20201109134931-342799
helpers_test.go:229: (dbg) docker inspect containerd-20201109134931-342799:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "99b356bd45d05677203cb4825dc4a93983c5623a7f06719039b28adcb5fd7500",
	        "Created": "2020-11-09T21:49:34.08306062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 587551,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-11-09T21:51:56.702777865Z",
	            "FinishedAt": "2020-11-09T21:51:54.068401138Z"
	        },
	        "Image": "sha256:e0876e0a2db41a04c8143cbd27a7a5f9e10b610cb093093def031c59e5b44b0c",
	        "ResolvConfPath": "/var/lib/docker/containers/99b356bd45d05677203cb4825dc4a93983c5623a7f06719039b28adcb5fd7500/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99b356bd45d05677203cb4825dc4a93983c5623a7f06719039b28adcb5fd7500/hostname",
	        "HostsPath": "/var/lib/docker/containers/99b356bd45d05677203cb4825dc4a93983c5623a7f06719039b28adcb5fd7500/hosts",
	        "LogPath": "/var/lib/docker/containers/99b356bd45d05677203cb4825dc4a93983c5623a7f06719039b28adcb5fd7500/99b356bd45d05677203cb4825dc4a93983c5623a7f06719039b28adcb5fd7500-json.log",
	        "Name": "/containerd-20201109134931-342799",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "containerd-20201109134931-342799:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "containerd-20201109134931-342799",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/26ca7d93cafe076415b98401787d6d825d1a2ffd3dcc5cc3a1fb34c684850325-init/diff:/var/lib/docker/overlay2/95e86d765bff52afd170fbce58d14aba80acb592622f18ddf93f06d4a9f68b7b/diff:/var/lib/docker/overlay2/f3b9d7ca87d0bc2c33a2b73d4acfe36b9573ed84758410f9fb2646a520f241fe/diff:/var/lib/docker/overlay2/207fc0f6ee67dd96ccb25c63e93e7189f5f397b509f8c71ead5583a80c5f94b5/diff:/var/lib/docker/overlay2/8caa5ec9f05b14f99c7708700c1734fe9d519d629a5228c0c3cb1231f1cef09e/diff:/var/lib/docker/overlay2/f2f5d9470c647e56b32cee1438db6b6249291738854b6e7a0a4a405034649aa6/diff:/var/lib/docker/overlay2/99b3672a581ed70eb5d25da5aed44455272a7adbd48148c91dc38abfbaa6e93d/diff:/var/lib/docker/overlay2/4bc3f7ac122327f368cdfb9bf82d578fce25cf7f63160aa08d869610da8889f4/diff:/var/lib/docker/overlay2/0f945a2ada7520d0f95289ddf08a8e7aedfd5dbfad8087529175cd81d61b52bc/diff:/var/lib/docker/overlay2/a14122d44a5bdb05416a0e7fbe1684738e86454c2eeb7671ecc856e6fdaa0273/diff:/var/lib/docker/overlay2/e669c0
1451df27b1aeed4835d4d9d40fae282947f2b260d9bc1d86355f464b80/diff:/var/lib/docker/overlay2/f0227d1f2e3671850f58a9e89d347d8c2a26c40d79b078f02122819452c582d8/diff:/var/lib/docker/overlay2/35b3431b00b5c7bd66b1d24487a8a45645708ce7a87848986068c0df1515b812/diff:/var/lib/docker/overlay2/c1af3457c560ed19e519e64ff919ac7035d8d9801f4f44adf7b160339827ebd0/diff:/var/lib/docker/overlay2/3861f37a52f49a93d483d69cc77384a5792b260325af8224c6225039f7480889/diff:/var/lib/docker/overlay2/05239e7556fc89f82e761ff1ddd2d1ed0ee1b426a039ad3c9a3ba0519d737f51/diff:/var/lib/docker/overlay2/cb1f04e7554841e6f2102080f7854a3c7e464e45648c1419ef2f8b4d8a03f0bb/diff:/var/lib/docker/overlay2/80908997b1122a686bee670e8dd93659dece6b51614d8576f02c11b96178534d/diff:/var/lib/docker/overlay2/6c4d0be909f5f0a95818a6009b44c355b57e39b0709de1cce2ba45597354d590/diff:/var/lib/docker/overlay2/3a855d5f2b162c2d938a5745ba8131c9a2b244967b4a03f6cddd177236f3a934/diff:/var/lib/docker/overlay2/0ab0a2a3e614830a41a389aa6ff55503f9c10b1227acfb927550f9726eb9a605/diff:/var/lib/d
ocker/overlay2/3331dc047d4c1c4d3b6bf48aba65be1ad34cff6d25329ef56e91ad8e4ee65fde/diff:/var/lib/docker/overlay2/d286c2ac5bb00869e09924bb7fc7389073b7267af1cc4c3ae6a339b29e4c0cc1/diff:/var/lib/docker/overlay2/d0a518fbd2bc93756d11040cfe4d7757f96e813dd9ddf918a54a82f7e61cb791/diff:/var/lib/docker/overlay2/2d63359a4d0da5faa54d017f487391caaee972fe4fb98db4b82e7aee2d9b01ee/diff:/var/lib/docker/overlay2/40ca142bba947004e2782bce25bd04810589fffbf8086ba91e3a074517cfa13c/diff:/var/lib/docker/overlay2/5b738c1270b81825ef44e9bd466dd8533bc4e3e3796355d7dc5dc104fb3a18c8/diff:/var/lib/docker/overlay2/c57d8a6f6b648bd2cf9d7140762b8590e1a71b9040c8a0f2b8ab31f65011bcd2/diff:/var/lib/docker/overlay2/37f83aba64f92a6cf695468f5b0a1016634c95e09933dcecec297cca22b9038b/diff:/var/lib/docker/overlay2/396d3843ed4e04893bd4bf961eab2522ce90962e626e057f5be38cbe0ed65d4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/26ca7d93cafe076415b98401787d6d825d1a2ffd3dcc5cc3a1fb34c684850325/merged",
	                "UpperDir": "/var/lib/docker/overlay2/26ca7d93cafe076415b98401787d6d825d1a2ffd3dcc5cc3a1fb34c684850325/diff",
	                "WorkDir": "/var/lib/docker/overlay2/26ca7d93cafe076415b98401787d6d825d1a2ffd3dcc5cc3a1fb34c684850325/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "containerd-20201109134931-342799",
	                "Source": "/var/lib/docker/volumes/containerd-20201109134931-342799/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "containerd-20201109134931-342799",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "containerd-20201109134931-342799",
	                "name.minikube.sigs.k8s.io": "containerd-20201109134931-342799",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d4f89d45be1ce4f8723c177710b2e578889efaf7c38beb91d64c55795d7f8145",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d4f89d45be1c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "containerd-20201109134931-342799": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.82.16"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "99b356bd45d0"
	                    ],
	                    "NetworkID": "fd09085a8fbbe4ca9d0ddc43b8c256383d5c5545ab16526640254961fac9996d",
	                    "EndpointID": "aee87ea6d2de2a507ad7645353bfaafc1b5289a12546659010a6cd0287ed9cb1",
	                    "Gateway": "192.168.82.1",
	                    "IPAddress": "192.168.82.16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:52:10",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p containerd-20201109134931-342799 -n containerd-20201109134931-342799

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/VerifyKubernetesImages
helpers_test.go:238: <<< TestStartStop/group/containerd/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/containerd/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p containerd-20201109134931-342799 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/VerifyKubernetesImages
helpers_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p containerd-20201109134931-342799 logs -n 25: (2.939456687s)
helpers_test.go:246: TestStartStop/group/containerd/serial/VerifyKubernetesImages logs: 
-- stdout --
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	* 7a1bc5021f7c8       503bc4b7440b9       18 seconds ago       Running             kubernetes-dashboard        0                   8d2e2e8430cf3
	* a4d50c8d630b0       86262685d9abb       18 seconds ago       Running             dashboard-metrics-scraper   0                   c3b71169346d1
	* 92583c67fa8d6       2186a1a396deb       32 seconds ago       Running             kindnet-cni                 1                   d5c07f06b371d
	* 3924359627f80       56cc512116c8f       33 seconds ago       Running             busybox                     1                   9eec8f16758d6
	* 39af8ef96ca06       bfe3a36ebd252       33 seconds ago       Running             coredns                     1                   7efc0377102c6
	* cd4b6c56fe2e5       d373dd5a8593a       33 seconds ago       Running             kube-proxy                  1                   07fa14c3bc61a
	* 0b79a09e2c8b5       bad58561c4be7       33 seconds ago       Exited              storage-provisioner         2                   243ca5b8b332f
	* 04f2d9b343c9e       8603821e1a7a5       42 seconds ago       Running             kube-controller-manager     2                   9cdd5c9661d17
	* 2d2d50c55165e       2f32d66b884f8       42 seconds ago       Running             kube-scheduler              1                   0557d0b4588f2
	* 5a2505cd4c0a6       0369cf4303ffd       42 seconds ago       Running             etcd                        1                   67c6650d5b81d
	* 5e6b4b76b9302       607331163122e       42 seconds ago       Running             kube-apiserver              1                   d68b763d44a25
	* dd8d6ea4cc99e       56cc512116c8f       About a minute ago   Exited              busybox                     0                   37850beb46002
	* c580c7a0a7eda       bfe3a36ebd252       About a minute ago   Exited              coredns                     0                   4c4ab51d87985
	* 36e8ddd26ca0d       2186a1a396deb       2 minutes ago        Exited              kindnet-cni                 0                   34f79141a007f
	* 44e21e64db01d       d373dd5a8593a       2 minutes ago        Exited              kube-proxy                  0                   a32a8c3cf3abe
	* ac1933ccabefb       8603821e1a7a5       2 minutes ago        Exited              kube-controller-manager     1                   ea53ef91fd5f6
	* 890ee7cd37088       0369cf4303ffd       3 minutes ago        Exited              etcd                        0                   b92e947b069e9
	* d88b553934b0c       607331163122e       3 minutes ago        Exited              kube-apiserver              0                   161d601f0234c
	* 4e1008484dc44       2f32d66b884f8       3 minutes ago        Exited              kube-scheduler              0                   1d4dd1f561cfa
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2020-11-09 21:52:09 UTC, end at Mon 2020-11-09 21:53:04 UTC. --
	* Nov 09 21:52:35 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:35.426192777Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/minikube-local-cache-test:functional-20201109132758-342799,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	* Nov 09 21:52:37 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:37.659289142Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/minikube-local-cache-test:functional-20201109132758-342799,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.309553110Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:dashboard-metrics-scraper-c95fcf479-gzdqv,Uid:fdf79f9c-981e-4e69-b0f4-ceb8026a763c,Namespace:kubernetes-dashboard,Attempt:0,}"
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.394661175Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-584f46694c-p6z7f,Uid:3f8065b2-207e-42f7-bb41-d677815e9f47,Namespace:kubernetes-dashboard,Attempt:0,}"
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.479064166Z" level=info msg="shim containerd-shim started" address=/containerd-shim/de847b3eec7e5a64c08a3bc2059e62cd9b011099fe47bc234126dcd5a5a8e846.sock debug=false pid=2832
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.499763776Z" level=info msg="shim containerd-shim started" address=/containerd-shim/fab50136094f10da5f17d9775baddf14fb95c6944677b87d5f8278d0c12d57c4.sock debug=false pid=2848
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.685951891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:dashboard-metrics-scraper-c95fcf479-gzdqv,Uid:fdf79f9c-981e-4e69-b0f4-ceb8026a763c,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"c3b71169346d192063830adda791c38eddb29178f50b7e591c8f07ffb0c788af\""
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.690103043Z" level=info msg="CreateContainer within sandbox \"c3b71169346d192063830adda791c38eddb29178f50b7e591c8f07ffb0c788af\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,}"
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.711721810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-584f46694c-p6z7f,Uid:3f8065b2-207e-42f7-bb41-d677815e9f47,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"8d2e2e8430cf3faf315b77b96636f30aa7d65652820862c3c94be6a8fcb59b5d\""
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.715175638Z" level=info msg="CreateContainer within sandbox \"8d2e2e8430cf3faf315b77b96636f30aa7d65652820862c3c94be6a8fcb59b5d\" for container &ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,}"
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.735884788Z" level=info msg="CreateContainer within sandbox \"c3b71169346d192063830adda791c38eddb29178f50b7e591c8f07ffb0c788af\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,} returns container id \"a4d50c8d630b06284ac69a135e80e1f102f253ea79155c9b2a8f8779768de182\""
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.736448911Z" level=info msg="StartContainer for \"a4d50c8d630b06284ac69a135e80e1f102f253ea79155c9b2a8f8779768de182\""
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.737449713Z" level=info msg="shim containerd-shim started" address=/containerd-shim/3a9504fb22bff05b2ba7d6111bd9302695ad5c4f8211c3ddb8782741b2f39f75.sock debug=false pid=2894
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.777554092Z" level=info msg="CreateContainer within sandbox \"8d2e2e8430cf3faf315b77b96636f30aa7d65652820862c3c94be6a8fcb59b5d\" for &ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,} returns container id \"7a1bc5021f7c88e0e6eda72bf53ff7e40108f3887b7e44577a823a37023c5469\""
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.780808460Z" level=info msg="StartContainer for \"7a1bc5021f7c88e0e6eda72bf53ff7e40108f3887b7e44577a823a37023c5469\""
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.784001391Z" level=info msg="shim containerd-shim started" address=/containerd-shim/1f47bb0a4719e0e463bd0b9d7f05e1fc7e0a971e2b154138596415c2aceff8c9.sock debug=false pid=2911
	* Nov 09 21:52:46 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:46.969007671Z" level=info msg="StartContainer for \"a4d50c8d630b06284ac69a135e80e1f102f253ea79155c9b2a8f8779768de182\" returns successfully"
	* Nov 09 21:52:47 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:52:47.066173099Z" level=info msg="StartContainer for \"7a1bc5021f7c88e0e6eda72bf53ff7e40108f3887b7e44577a823a37023c5469\" returns successfully"
	* Nov 09 21:53:01 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:53:01.898260697Z" level=info msg="Finish piping stdout of container \"0b79a09e2c8b5606dc078925c8e8d5bca8f3b7baa7e75c3f8ff57bc286dfdc53\""
	* Nov 09 21:53:01 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:53:01.898263265Z" level=info msg="Finish piping stderr of container \"0b79a09e2c8b5606dc078925c8e8d5bca8f3b7baa7e75c3f8ff57bc286dfdc53\""
	* Nov 09 21:53:01 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:53:01.901586315Z" level=info msg="TaskExit event &TaskExit{ContainerID:0b79a09e2c8b5606dc078925c8e8d5bca8f3b7baa7e75c3f8ff57bc286dfdc53,ID:0b79a09e2c8b5606dc078925c8e8d5bca8f3b7baa7e75c3f8ff57bc286dfdc53,Pid:2163,ExitStatus:1,ExitedAt:2020-11-09 21:53:01.901045386 +0000 UTC,XXX_unrecognized:[],}"
	* Nov 09 21:53:01 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:53:01.955311926Z" level=info msg="shim reaped" id=0b79a09e2c8b5606dc078925c8e8d5bca8f3b7baa7e75c3f8ff57bc286dfdc53
	* Nov 09 21:53:02 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:53:02.995413380Z" level=info msg="RemoveContainer for \"7e611def4558b78c52f5acda651efcc4dd98cca0d89d034708b02c646b8946f6\""
	* Nov 09 21:53:03 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:53:03.004928949Z" level=info msg="RemoveContainer for \"7e611def4558b78c52f5acda651efcc4dd98cca0d89d034708b02c646b8946f6\" returns successfully"
	* Nov 09 21:53:03 containerd-20201109134931-342799 containerd[421]: time="2020-11-09T21:53:03.508223546Z" level=error msg="ContainerStatus for \"7e611def4558b78c52f5acda651efcc4dd98cca0d89d034708b02c646b8946f6\" failed" error="an error occurred when try to find container \"7e611def4558b78c52f5acda651efcc4dd98cca0d89d034708b02c646b8946f6\": does not exist"
	* 
	* ==> coredns [39af8ef96ca06df7744beff16ef4e9bd7adef7e9bbed662c7825f74f25c777a6] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* I1109 21:53:02.097815       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2020-11-09 21:52:32.093690653 +0000 UTC m=+0.032413198) (total time: 30.004069902s):
	* Trace[939984059]: [30.004069902s] [30.004069902s] END
	* E1109 21:53:02.097855       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	* I1109 21:53:02.097816       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2020-11-09 21:52:32.093595904 +0000 UTC m=+0.032318490) (total time: 30.004049744s):
	* Trace[2019727887]: [30.004049744s] [30.004049744s] END
	* E1109 21:53:02.097876       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	* I1109 21:53:02.097879       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2020-11-09 21:52:32.093602287 +0000 UTC m=+0.032324842) (total time: 30.00405918s):
	* Trace[1427131847]: [30.00405918s] [30.00405918s] END
	* E1109 21:53:02.097885       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	* 
	* ==> coredns [c580c7a0a7eda9845120a0b4adae01794159dea866c8c3d1c4a453ad8f686c81] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* 
	* ==> describe nodes <==
	* Name:               containerd-20201109134931-342799
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=containerd-20201109134931-342799
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=21ac2a6a37964be4739a8be2fb5a50a8d224597d
	*                     minikube.k8s.io/name=containerd-20201109134931-342799
	*                     minikube.k8s.io/updated_at=2020_11_09T13_50_28_0700
	*                     minikube.k8s.io/version=v1.14.2
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Mon, 09 Nov 2020 21:50:03 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  containerd-20201109134931-342799
	*   AcquireTime:     <unset>
	*   RenewTime:       Mon, 09 Nov 2020 21:53:00 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Mon, 09 Nov 2020 21:52:30 +0000   Mon, 09 Nov 2020 21:49:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Mon, 09 Nov 2020 21:52:30 +0000   Mon, 09 Nov 2020 21:49:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Mon, 09 Nov 2020 21:52:30 +0000   Mon, 09 Nov 2020 21:49:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Mon, 09 Nov 2020 21:52:30 +0000   Mon, 09 Nov 2020 21:50:43 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.82.16
	*   Hostname:    containerd-20201109134931-342799
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 40320265c5284752a13c6ca8ef6f6de3
	*   System UUID:                0618ea8e-43e2-48d6-84fd-daea6cb5b020
	*   Boot ID:                    9ad1ab50-5be9-48e2-8ae1-dc31113bc120
	*   Kernel Version:             4.9.0-14-amd64
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  containerd://1.3.7
	*   Kubelet Version:            v1.19.2
	*   Kube-Proxy Version:         v1.19.2
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (11 in total)
	*   Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	*   default                     busybox                                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	*   kube-system                 coredns-f9fd979d6-dqwvt                                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m23s
	*   kube-system                 etcd-containerd-20201109134931-342799                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	*   kube-system                 kindnet-gvs6z                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m22s
	*   kube-system                 kube-apiserver-containerd-20201109134931-342799             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	*   kube-system                 kube-controller-manager-containerd-20201109134931-342799    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m41s
	*   kube-system                 kube-proxy-chp96                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	*   kube-system                 kube-scheduler-containerd-20201109134931-342799             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	*   kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	*   kubernetes-dashboard        dashboard-metrics-scraper-c95fcf479-gzdqv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	*   kubernetes-dashboard        kubernetes-dashboard-584f46694c-p6z7f                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                750m (9%)   100m (1%)
	*   memory             120Mi (0%)  220Mi (0%)
	*   ephemeral-storage  0 (0%)      0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age                    From        Message
	*   ----    ------                   ----                   ----        -------
	*   Normal  NodeHasSufficientMemory  3m11s (x5 over 3m11s)  kubelet     Node containerd-20201109134931-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    3m11s (x5 over 3m11s)  kubelet     Node containerd-20201109134931-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     3m11s (x5 over 3m11s)  kubelet     Node containerd-20201109134931-342799 status is now: NodeHasSufficientPID
	*   Normal  Starting                 2m32s                  kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  2m32s                  kubelet     Node containerd-20201109134931-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    2m32s                  kubelet     Node containerd-20201109134931-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     2m32s                  kubelet     Node containerd-20201109134931-342799 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             2m32s                  kubelet     Node containerd-20201109134931-342799 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  2m32s                  kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                2m22s                  kubelet     Node containerd-20201109134931-342799 status is now: NodeReady
	*   Normal  Starting                 2m17s                  kube-proxy  Starting kube-proxy.
	*   Normal  Starting                 44s                    kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  44s (x8 over 44s)      kubelet     Node containerd-20201109134931-342799 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    44s (x8 over 44s)      kubelet     Node containerd-20201109134931-342799 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     44s (x7 over 44s)      kubelet     Node containerd-20201109134931-342799 status is now: NodeHasSufficientPID
	*   Normal  NodeAllocatableEnforced  44s                    kubelet     Updated Node Allocatable limit across pods
	*   Normal  Starting                 33s                    kube-proxy  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [  +0.678855] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000002] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +0.000002] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000001] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +1.273129] IPv4: martian source 10.85.0.13 from 10.85.0.13, on dev eth0
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff f6 e9 8a 0e 0a 19 08 06        ..............
	* [  +1.464078] IPv4: martian source 10.85.0.14 from 10.85.0.14, on dev eth0
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 26 a7 bc 25 52 eb 08 06        ......&..%R...
	* [  +0.438582] IPv4: martian source 10.85.0.15 from 10.85.0.15, on dev eth0
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 5e 56 30 8c 94 40 08 06        ......^V0..@..
	* [  +0.000728] IPv4: martian source 10.85.0.16 from 10.85.0.16, on dev eth0
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff 32 a4 3e fc 48 72 08 06        ......2.>.Hr..
	* [  +3.860872] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vethd3a877be
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 9e ad 56 03 1d d4 08 06        ........V.....
	* [  +0.035505] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethc290fae4
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 02 1c 06 fe cb 70 08 06        ...........p..
	* [  +1.118377] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000003] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000001] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +0.003991] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-fd09085a8fbb
	* [  +0.000005] ll header: 00000000: 02 42 de 4e d1 5f 02 42 c0 a8 52 10 08 00        .B.N._.B..R...
	* [  +6.920951] cgroup: cgroup2: unknown option "nsdelegate"
	* 
	* ==> etcd [5a2505cd4c0a6014393c7edf39ca5d91c3d3c4d573173cc73a81f861f1a2d9de] <==
	* 2020-11-09 21:52:22.784740 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	* raft2020/11/09 21:52:22 INFO: 9364466d61f1ec2b switched to configuration voters=(10620691256855096363)
	* 2020-11-09 21:52:22.785441 I | etcdserver/membership: added member 9364466d61f1ec2b [https://192.168.82.16:2380] to cluster ce8177bd8a545254
	* 2020-11-09 21:52:22.785599 N | etcdserver/membership: set the initial cluster version to 3.4
	* 2020-11-09 21:52:22.785647 I | etcdserver/api: enabled capabilities for version 3.4
	* 2020-11-09 21:52:22.790154 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	* 2020-11-09 21:52:22.790607 I | embed: listening for peers on 192.168.82.16:2380
	* 2020-11-09 21:52:22.791204 I | embed: listening for metrics on http://127.0.0.1:2381
	* raft2020/11/09 21:52:23 INFO: 9364466d61f1ec2b is starting a new election at term 2
	* raft2020/11/09 21:52:23 INFO: 9364466d61f1ec2b became candidate at term 3
	* raft2020/11/09 21:52:23 INFO: 9364466d61f1ec2b received MsgVoteResp from 9364466d61f1ec2b at term 3
	* raft2020/11/09 21:52:23 INFO: 9364466d61f1ec2b became leader at term 3
	* raft2020/11/09 21:52:23 INFO: raft.node: 9364466d61f1ec2b elected leader 9364466d61f1ec2b at term 3
	* 2020-11-09 21:52:23.975832 I | etcdserver: published {Name:containerd-20201109134931-342799 ClientURLs:[https://192.168.82.16:2379]} to cluster ce8177bd8a545254
	* 2020-11-09 21:52:23.975862 I | embed: ready to serve client requests
	* 2020-11-09 21:52:23.978080 I | embed: serving client requests on 127.0.0.1:2379
	* 2020-11-09 21:52:23.978205 I | embed: ready to serve client requests
	* 2020-11-09 21:52:23.979804 I | embed: serving client requests on 192.168.82.16:2379
	* 2020-11-09 21:52:39.737697 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:52:44.977511 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:52:46.103289 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kubernetes-dashboard/default\" " with result "range_response_count:1 size:199" took too long (117.15839ms) to execute
	* 2020-11-09 21:52:46.122274 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479-gzdqv\" " with result "range_response_count:1 size:2903" took too long (116.560946ms) to execute
	* 2020-11-09 21:52:54.977470 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:53:02.968725 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (676.682026ms) to execute
	* 2020-11-09 21:53:04.977537 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> etcd [890ee7cd370886d799eb9eda7907865382550e8cd7dbe836988ae1cb52c94260] <==
	* 2020-11-09 21:50:24.824979 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:0 size:4" took too long (2.937568782s) to execute
	* 2020-11-09 21:50:24.825574 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/containerd-20201109134931-342799\" " with result "range_response_count:0 size:4" took too long (2.79013846s) to execute
	* 2020-11-09 21:50:24.834807 W | etcdserver: read-only range request "key:\"/registry/resourcequotas/kube-system/\" range_end:\"/registry/resourcequotas/kube-system0\" " with result "range_response_count:0 size:4" took too long (299.83796ms) to execute
	* 2020-11-09 21:50:25.358074 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:50:38.791161 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:50:40.177051 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:50:50.177392 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:51:00.177687 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-09 21:51:07.441009 W | etcdserver: request "header:<ID:17017825011373359164 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.82.16\" mod_revision:422 > success:<request_put:<key:\"/registry/masterleases/192.168.82.16\" value_size:68 lease:7794452974518583354 >> failure:<request_range:<key:\"/registry/masterleases/192.168.82.16\" > >>" with result "size:16" took too long (417.905089ms) to execute
	* 2020-11-09 21:51:07.441184 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (859.114232ms) to execute
	* 2020-11-09 21:51:11.177003 W | etcdserver/api/etcdhttp: /health error; QGET failed etcdserver: request timed out (status code 503)
	* 2020-11-09 21:51:11.973210 W | wal: sync duration of 2.382323111s, expected less than 1s
	* 2020-11-09 21:51:12.257129 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (4.673826472s) to execute
	* 2020-11-09 21:51:12.257170 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:5690" took too long (2.780789521s) to execute
	* 2020-11-09 21:51:12.257187 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (2.668500524s) to execute
	* 2020-11-09 21:51:12.257259 W | etcdserver: read-only range request "key:\"/registry/minions\" range_end:\"/registry/miniont\" count_only:true " with result "range_response_count:0 size:7" took too long (3.107011099s) to execute
	* 2020-11-09 21:51:12.257369 W | etcdserver: request "header:<ID:17017825011373359175 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-containerd-20201109134931-342799.1645f56813b78812\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-containerd-20201109134931-342799.1645f56813b78812\" value_size:765 lease:7794452974518582788 >> failure:<>>" with result "size:16" took too long (2.666328699s) to execute
	* 2020-11-09 21:51:12.257785 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:8 size:40946" took too long (2.362578565s) to execute
	* 2020-11-09 21:51:12.260936 W | etcdserver: read-only range request "key:\"/registry/services/endpoints\" range_end:\"/registry/services/endpointt\" count_only:true " with result "range_response_count:0 size:7" took too long (2.306538469s) to execute
	* 2020-11-09 21:51:12.261450 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (303.30854ms) to execute
	* 2020-11-09 21:51:12.261825 W | etcdserver: read-only range request "key:\"/registry/limitranges\" range_end:\"/registry/limitranget\" count_only:true " with result "range_response_count:0 size:5" took too long (418.97084ms) to execute
	* 2020-11-09 21:51:12.262091 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (666.65172ms) to execute
	* 2020-11-09 21:51:12.262355 W | etcdserver: read-only range request "key:\"/registry/csinodes\" range_end:\"/registry/csinodet\" count_only:true " with result "range_response_count:0 size:7" took too long (1.558202852s) to execute
	* 2020-11-09 21:51:12.262748 W | etcdserver: read-only range request "key:\"/registry/controllerrevisions\" range_end:\"/registry/controllerrevisiont\" count_only:true " with result "range_response_count:0 size:7" took too long (1.922770924s) to execute
	* 2020-11-09 21:51:20.177099 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 
	* ==> kernel <==
	*  21:53:05 up  1:35,  0 users,  load average: 11.54, 10.92, 9.00
	* Linux containerd-20201109134931-342799 4.9.0-14-amd64 #1 SMP Debian 4.9.240-2 (2020-10-30) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [5e6b4b76b93026c5d4e92fb298546ae77346769034d7b2e7d90d097b399894ef] <==
	* I1109 21:52:30.235733       1 establishing_controller.go:76] Starting EstablishingController
	* I1109 21:52:30.234603       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
	* I1109 21:52:30.255449       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* I1109 21:52:30.292958       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	* I1109 21:52:30.358000       1 cache.go:39] Caches are synced for AvailableConditionController controller
	* I1109 21:52:30.358015       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	* I1109 21:52:30.358073       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	* I1109 21:52:30.358644       1 cache.go:39] Caches are synced for autoregister controller
	* I1109 21:52:30.358690       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	* E1109 21:52:30.366219       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	* I1109 21:52:31.233439       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	* I1109 21:52:31.233480       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	* I1109 21:52:31.240733       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	* I1109 21:52:32.779905       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	* I1109 21:52:33.039559       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	* I1109 21:52:33.073516       1 controller.go:606] quota admission added evaluator for: deployments.apps
	* I1109 21:52:33.179897       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	* I1109 21:52:33.193871       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	* I1109 21:52:45.896951       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	* I1109 21:52:45.964862       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	* I1109 21:52:45.974985       1 controller.go:606] quota admission added evaluator for: endpoints
	* I1109 21:52:45.976842       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	* I1109 21:53:02.969727       1 trace.go:205] Trace[700993501]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (09-Nov-2020 21:53:01.977) (total time: 991ms):
	* Trace[700993501]: ---"Transaction committed" 989ms (21:53:00.969)
	* Trace[700993501]: [991.731448ms] [991.731448ms] END
	* 
	* ==> kube-apiserver [d88b553934b0cc487ef83effd6c68709f81a7aa13f7c2068771f4ba1e4e3ee7b] <==
	* I1109 21:50:27.964860       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	* I1109 21:50:33.957905       1 client.go:360] parsed scheme: "passthrough"
	* I1109 21:50:33.957977       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I1109 21:50:33.957991       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* I1109 21:50:42.913510       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	* I1109 21:50:43.140284       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	* I1109 21:51:07.441885       1 trace.go:205] Trace[2092517085]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (09-Nov-2020 21:51:06.280) (total time: 1161ms):
	* Trace[2092517085]: ---"Transaction committed" 1159ms (21:51:00.441)
	* Trace[2092517085]: [1.161073772s] [1.161073772s] END
	* I1109 21:51:12.258016       1 trace.go:205] Trace[2012452681]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (09-Nov-2020 21:51:09.475) (total time: 2782ms):
	* Trace[2012452681]: [2.782258557s] [2.782258557s] END
	* I1109 21:51:12.258394       1 trace.go:205] Trace[783842984]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.19.2 (linux/amd64) kubernetes/f574309,client:192.168.82.16 (09-Nov-2020 21:51:09.589) (total time: 2668ms):
	* Trace[783842984]: ---"Object stored in database" 2668ms (21:51:00.258)
	* Trace[783842984]: [2.668644744s] [2.668644744s] END
	* I1109 21:51:12.258571       1 trace.go:205] Trace[227814412]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.82.16 (09-Nov-2020 21:51:09.475) (total time: 2782ms):
	* Trace[227814412]: ---"Listing from storage done" 2782ms (21:51:00.258)
	* Trace[227814412]: [2.782849278s] [2.782849278s] END
	* I1109 21:51:12.263134       1 trace.go:205] Trace[812533960]: "List etcd3" key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: (09-Nov-2020 21:51:09.894) (total time: 2368ms):
	* Trace[812533960]: [2.368476609s] [2.368476609s] END
	* I1109 21:51:12.264808       1 trace.go:205] Trace[868845683]: "List" url:/api/v1/namespaces/kube-system/pods,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.82.1 (09-Nov-2020 21:51:09.894) (total time: 2370ms):
	* Trace[868845683]: ---"Listing from storage done" 2368ms (21:51:00.263)
	* Trace[868845683]: [2.370162076s] [2.370162076s] END
	* I1109 21:51:17.980362       1 client.go:360] parsed scheme: "passthrough"
	* I1109 21:51:17.980434       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	* I1109 21:51:17.980490       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	* 
	* ==> kube-controller-manager [04f2d9b343c9e28b1a6ac5c56a9859008e396b7bce3b769554a2f6931bf3d572] <==
	* I1109 21:52:45.957553       1 shared_informer.go:247] Caches are synced for service account 
	* I1109 21:52:45.957585       1 shared_informer.go:247] Caches are synced for TTL 
	* I1109 21:52:45.957636       1 range_allocator.go:172] Starting range CIDR allocator
	* I1109 21:52:45.957697       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	* I1109 21:52:45.957716       1 shared_informer.go:247] Caches are synced for cidrallocator 
	* I1109 21:52:45.961906       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	* I1109 21:52:45.965718       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-c95fcf479-gzdqv"
	* I1109 21:52:45.967673       1 shared_informer.go:247] Caches are synced for namespace 
	* I1109 21:52:45.972293       1 shared_informer.go:247] Caches are synced for endpoint 
	* I1109 21:52:45.983863       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-584f46694c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-584f46694c-p6z7f"
	* I1109 21:52:46.061032       1 shared_informer.go:247] Caches are synced for persistent volume 
	* I1109 21:52:46.065442       1 shared_informer.go:247] Caches are synced for daemon sets 
	* I1109 21:52:46.081008       1 shared_informer.go:247] Caches are synced for stateful set 
	* I1109 21:52:46.099396       1 shared_informer.go:247] Caches are synced for disruption 
	* I1109 21:52:46.099431       1 disruption.go:339] Sending events to api server.
	* I1109 21:52:46.101267       1 shared_informer.go:247] Caches are synced for attach detach 
	* I1109 21:52:46.105022       1 shared_informer.go:247] Caches are synced for expand 
	* I1109 21:52:46.133448       1 shared_informer.go:247] Caches are synced for PVC protection 
	* I1109 21:52:46.141181       1 shared_informer.go:247] Caches are synced for resource quota 
	* I1109 21:52:46.143327       1 shared_informer.go:247] Caches are synced for resource quota 
	* I1109 21:52:46.182306       1 shared_informer.go:247] Caches are synced for ReplicationController 
	* I1109 21:52:46.214978       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I1109 21:52:46.469164       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:52:46.469201       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I1109 21:52:46.516645       1 shared_informer.go:247] Caches are synced for garbage collector 
	* 
	* ==> kube-controller-manager [ac1933ccabefbbff7c4695d3fe105ac2d2f27673a9ebab9320975cbad7e53eed] <==
	* I1109 21:50:43.100323       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-containerd-20201109134931-342799" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	* I1109 21:50:43.128990       1 shared_informer.go:247] Caches are synced for daemon sets 
	* I1109 21:50:43.136047       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	* I1109 21:50:43.136078       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	* I1109 21:50:43.172701       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gvs6z"
	* I1109 21:50:43.175364       1 shared_informer.go:247] Caches are synced for persistent volume 
	* I1109 21:50:43.176059       1 shared_informer.go:247] Caches are synced for attach detach 
	* I1109 21:50:43.176566       1 shared_informer.go:247] Caches are synced for GC 
	* I1109 21:50:43.176687       1 shared_informer.go:247] Caches are synced for TTL 
	* I1109 21:50:43.181678       1 shared_informer.go:247] Caches are synced for node 
	* I1109 21:50:43.181733       1 range_allocator.go:172] Starting range CIDR allocator
	* I1109 21:50:43.181739       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	* I1109 21:50:43.181745       1 shared_informer.go:247] Caches are synced for cidrallocator 
	* E1109 21:50:43.191133       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	* I1109 21:50:43.203091       1 range_allocator.go:373] Set node containerd-20201109134931-342799 PodCIDR to [10.244.0.0/24]
	* I1109 21:50:43.218781       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I1109 21:50:43.219355       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-chp96"
	* E1109 21:50:43.228453       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"e0cd4310-7660-47d0-85e2-31d5e05c4ff2", ResourceVersion:"242", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63740555428, loc:(*time.Location)(0x6a59c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-syste
m\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:0.5.4\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"e
ffect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0019074a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0019074c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0019074e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]str
ing{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001907500), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil
), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001907520), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001907540), Empt
yDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil
), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:0.5.4", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001907560)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0019075a0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDe
cAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Li
fecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00160f020), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000ef4058), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000417490), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.Hos
tAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0011581c8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000ef40c0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	* E1109 21:50:43.264108       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"fa2dde10-2899-496f-b231-4526182b2289", ResourceVersion:"225", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63740555427, loc:(*time.Location)(0x6a59c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001907380), FieldsType:"FieldsV1", FieldsV1:(*v1.Fiel
dsV1)(0xc0019073a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0019073c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)
(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001234180), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v
1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0019073e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersi
stentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001907400), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.
DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.19.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"",
ValueFrom:(*v1.EnvVarSource)(0xc001907440)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00160efc0), Stdin:false, StdinOnce:false, TTY:false}},
EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000d7bdf8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0004173b0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConf
ig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0011581c0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000d7be48)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	* I1109 21:50:43.519030       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:50:43.524821       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1109 21:50:43.524878       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I1109 21:50:43.694917       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-f9fd979d6 to 1"
	* I1109 21:50:43.716567       1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-f9fd979d6-95r2l"
	* I1109 21:50:48.088846       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	* 
	* ==> kube-proxy [44e21e64db01dc07094e6d9988eb6874df124a8a86c97d7224ae6bac80bbfec6] <==
	* I1109 21:50:48.819576       1 node.go:136] Successfully retrieved node IP: 192.168.82.16
	* I1109 21:50:48.819800       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.82.16), assume IPv4 operation
	* W1109 21:50:48.988907       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
	* I1109 21:50:48.989014       1 server_others.go:186] Using iptables Proxier.
	* I1109 21:50:48.989410       1 server.go:650] Version: v1.19.2
	* I1109 21:50:48.990188       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:50:48.990806       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1109 21:50:48.990874       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1109 21:50:48.991086       1 config.go:315] Starting service config controller
	* I1109 21:50:48.991100       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I1109 21:50:48.991220       1 config.go:224] Starting endpoint slice config controller
	* I1109 21:50:48.991273       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I1109 21:50:49.091358       1 shared_informer.go:247] Caches are synced for service config 
	* I1109 21:50:49.091376       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* 
	* ==> kube-proxy [cd4b6c56fe2e5480b780d5e22d95bc29c9d6df74f20a127c58a3ea9657970095] <==
	* I1109 21:52:31.969248       1 node.go:136] Successfully retrieved node IP: 192.168.82.16
	* I1109 21:52:31.969555       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.82.16), assume IPv4 operation
	* W1109 21:52:32.119708       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
	* I1109 21:52:32.119815       1 server_others.go:186] Using iptables Proxier.
	* I1109 21:52:32.120142       1 server.go:650] Version: v1.19.2
	* I1109 21:52:32.120808       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1109 21:52:32.120938       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1109 21:52:32.120991       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1109 21:52:32.121742       1 config.go:315] Starting service config controller
	* I1109 21:52:32.121752       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I1109 21:52:32.121787       1 config.go:224] Starting endpoint slice config controller
	* I1109 21:52:32.121792       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I1109 21:52:32.222090       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* I1109 21:52:32.222125       1 shared_informer.go:247] Caches are synced for service config 
	* 
	* ==> kube-scheduler [2d2d50c55165ec543eecaee424348d515792f6855823d31259bf4a6adf2eab1a] <==
	* I1109 21:52:22.974805       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:52:22.974895       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:52:24.610180       1 serving.go:331] Generated self-signed cert in-memory
	* W1109 21:52:30.259422       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	* W1109 21:52:30.259461       1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	* W1109 21:52:30.259501       1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
	* W1109 21:52:30.259510       1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	* I1109 21:52:30.388725       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:52:30.388751       1 registry.go:173] Registering SelectorSpread plugin
	* I1109 21:52:30.393343       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:52:30.393387       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1109 21:52:30.393747       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	* I1109 21:52:30.393811       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* I1109 21:52:30.493717       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kube-scheduler [4e1008484dc44d16f3659d0cce2eeb98c2f0e2af29fdfe7722eb1cb86d7d7a6d] <==
	* E1109 21:50:10.716945       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:50:11.097929       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:50:11.130303       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:50:11.322939       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:50:12.204078       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:50:12.633512       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:50:12.892331       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:50:13.125423       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:50:13.462016       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1109 21:50:13.466241       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1109 21:50:13.577457       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1109 21:50:17.881755       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1109 21:50:18.728946       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1109 21:50:18.979164       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:50:19.064843       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1109 21:50:19.500414       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:50:19.668754       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1109 21:50:20.397004       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1109 21:50:21.403896       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1109 21:50:23.033702       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1109 21:50:23.260442       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1109 21:50:23.283191       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1109 21:50:23.694181       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1109 21:50:25.358785       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* I1109 21:50:42.959397       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2020-11-09 21:52:09 UTC, end at Mon 2020-11-09 21:53:06 UTC. --
	* Nov 09 21:52:30 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:30.466330     824 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-sp4dk" (UniqueName: "kubernetes.io/secret/f3f389ed-b66d-493e-9437-e28c846398fc-coredns-token-sp4dk") pod "coredns-f9fd979d6-dqwvt" (UID: "f3f389ed-b66d-493e-9437-e28c846398fc")
	* Nov 09 21:52:30 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:30.466354     824 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/9f178faa-40cb-4f8b-8b92-4812dc53ac67-kube-proxy") pod "kube-proxy-chp96" (UID: "9f178faa-40cb-4f8b-8b92-4812dc53ac67")
	* Nov 09 21:52:30 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:30.466386     824 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-bzgzs" (UniqueName: "kubernetes.io/secret/9f178faa-40cb-4f8b-8b92-4812dc53ac67-kube-proxy-token-bzgzs") pod "kube-proxy-chp96" (UID: "9f178faa-40cb-4f8b-8b92-4812dc53ac67")
	* Nov 09 21:52:30 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:30.469078     824 kubelet_node_status.go:108] Node containerd-20201109134931-342799 was previously registered
	* Nov 09 21:52:30 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:30.469432     824 kubelet_node_status.go:73] Successfully registered node containerd-20201109134931-342799
	* Nov 09 21:52:30 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:30.567124     824 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/633ad387-6ebd-43f2-84c7-9a0535dbc678-tmp") pod "storage-provisioner" (UID: "633ad387-6ebd-43f2-84c7-9a0535dbc678")
	* Nov 09 21:52:30 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:30.567175     824 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-vdcr7" (UniqueName: "kubernetes.io/secret/633ad387-6ebd-43f2-84c7-9a0535dbc678-storage-provisioner-token-vdcr7") pod "storage-provisioner" (UID: "633ad387-6ebd-43f2-84c7-9a0535dbc678")
	* Nov 09 21:52:30 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:30.567328     824 reconciler.go:157] Reconciler: start to sync state
	* Nov 09 21:52:32 containerd-20201109134931-342799 kubelet[824]: E1109 21:52:32.793622     824 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Nov 09 21:52:32 containerd-20201109134931-342799 kubelet[824]: E1109 21:52:32.793701     824 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Nov 09 21:52:42 containerd-20201109134931-342799 kubelet[824]: E1109 21:52:42.876573     824 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Nov 09 21:52:42 containerd-20201109134931-342799 kubelet[824]: E1109 21:52:42.876654     824 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Nov 09 21:52:45 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:45.977192     824 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 09 21:52:46 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:46.071844     824 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 09 21:52:46 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:46.123791     824 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/fdf79f9c-981e-4e69-b0f4-ceb8026a763c-tmp-volume") pod "dashboard-metrics-scraper-c95fcf479-gzdqv" (UID: "fdf79f9c-981e-4e69-b0f4-ceb8026a763c")
	* Nov 09 21:52:46 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:46.124301     824 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-k8fnf" (UniqueName: "kubernetes.io/secret/3f8065b2-207e-42f7-bb41-d677815e9f47-kubernetes-dashboard-token-k8fnf") pod "kubernetes-dashboard-584f46694c-p6z7f" (UID: "3f8065b2-207e-42f7-bb41-d677815e9f47")
	* Nov 09 21:52:46 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:46.124551     824 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/3f8065b2-207e-42f7-bb41-d677815e9f47-tmp-volume") pod "kubernetes-dashboard-584f46694c-p6z7f" (UID: "3f8065b2-207e-42f7-bb41-d677815e9f47")
	* Nov 09 21:52:46 containerd-20201109134931-342799 kubelet[824]: I1109 21:52:46.124950     824 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-k8fnf" (UniqueName: "kubernetes.io/secret/fdf79f9c-981e-4e69-b0f4-ceb8026a763c-kubernetes-dashboard-token-k8fnf") pod "dashboard-metrics-scraper-c95fcf479-gzdqv" (UID: "fdf79f9c-981e-4e69-b0f4-ceb8026a763c")
	* Nov 09 21:52:53 containerd-20201109134931-342799 kubelet[824]: E1109 21:52:53.039201     824 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Nov 09 21:52:53 containerd-20201109134931-342799 kubelet[824]: E1109 21:52:53.039479     824 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Nov 09 21:53:02 containerd-20201109134931-342799 kubelet[824]: I1109 21:53:02.975306     824 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 7e611def4558b78c52f5acda651efcc4dd98cca0d89d034708b02c646b8946f6
	* Nov 09 21:53:02 containerd-20201109134931-342799 kubelet[824]: I1109 21:53:02.975817     824 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 0b79a09e2c8b5606dc078925c8e8d5bca8f3b7baa7e75c3f8ff57bc286dfdc53
	* Nov 09 21:53:02 containerd-20201109134931-342799 kubelet[824]: E1109 21:53:02.976260     824 pod_workers.go:191] Error syncing pod 633ad387-6ebd-43f2-84c7-9a0535dbc678 ("storage-provisioner_kube-system(633ad387-6ebd-43f2-84c7-9a0535dbc678)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(633ad387-6ebd-43f2-84c7-9a0535dbc678)"
	* Nov 09 21:53:03 containerd-20201109134931-342799 kubelet[824]: E1109 21:53:03.128046     824 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Nov 09 21:53:03 containerd-20201109134931-342799 kubelet[824]: E1109 21:53:03.128134     824 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* 
	* ==> kubernetes-dashboard [7a1bc5021f7c88e0e6eda72bf53ff7e40108f3887b7e44577a823a37023c5469] <==
	* 2020/11/09 21:52:47 Starting overwatch
	* 2020/11/09 21:52:47 Using namespace: kubernetes-dashboard
	* 2020/11/09 21:52:47 Using in-cluster config to connect to apiserver
	* 2020/11/09 21:52:47 Using secret token for csrf signing
	* 2020/11/09 21:52:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	* 2020/11/09 21:52:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	* 2020/11/09 21:52:47 Successful initial request to the apiserver, version: v1.19.2
	* 2020/11/09 21:52:47 Generating JWE encryption key
	* 2020/11/09 21:52:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	* 2020/11/09 21:52:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	* 2020/11/09 21:52:47 Initializing JWE encryption key from synchronized object
	* 2020/11/09 21:52:47 Creating in-cluster Sidecar client
	* 2020/11/09 21:52:47 Serving insecurely on HTTP port: 9090
	* 2020/11/09 21:52:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	* 
	* ==> storage-provisioner [0b79a09e2c8b5606dc078925c8e8d5bca8f3b7baa7e75c3f8ff57bc286dfdc53] <==
	* F1109 21:53:01.862501       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 13:53:05.887092  605493 out.go:286] unable to execute * 2020-11-09 21:51:07.441009 W | etcdserver: request "header:<ID:17017825011373359164 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.82.16\" mod_revision:422 > success:<request_put:<key:\"/registry/masterleases/192.168.82.16\" value_size:68 lease:7794452974518583354 >> failure:<request_range:<key:\"/registry/masterleases/192.168.82.16\" > >>" with result "size:16" took too long (417.905089ms) to execute
	: html/template:* 2020-11-09 21:51:07.441009 W | etcdserver: request "header:<ID:17017825011373359164 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.82.16\" mod_revision:422 > success:<request_put:<key:\"/registry/masterleases/192.168.82.16\" value_size:68 lease:7794452974518583354 >> failure:<request_range:<key:\"/registry/masterleases/192.168.82.16\" > >>" with result "size:16" took too long (417.905089ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E1109 13:53:05.918082  605493 out.go:286] unable to execute * 2020-11-09 21:51:12.257369 W | etcdserver: request "header:<ID:17017825011373359175 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-containerd-20201109134931-342799.1645f56813b78812\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-containerd-20201109134931-342799.1645f56813b78812\" value_size:765 lease:7794452974518582788 >> failure:<>>" with result "size:16" took too long (2.666328699s) to execute
	: html/template:* 2020-11-09 21:51:12.257369 W | etcdserver: request "header:<ID:17017825011373359175 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-containerd-20201109134931-342799.1645f56813b78812\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-containerd-20201109134931-342799.1645f56813b78812\" value_size:765 lease:7794452974518582788 >> failure:<>>" with result "size:16" took too long (2.666328699s) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.

                                                
                                                
** /stderr **
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p containerd-20201109134931-342799 -n containerd-20201109134931-342799
helpers_test.go:255: (dbg) Run:  kubectl --context containerd-20201109134931-342799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: non-running pods: 
helpers_test.go:263: ======> post-mortem[TestStartStop/group/containerd/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:266: (dbg) Run:  kubectl --context containerd-20201109134931-342799 describe pod 
helpers_test.go:266: (dbg) Non-zero exit: kubectl --context containerd-20201109134931-342799 describe pod : exit status 1 (94.333569ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:268: kubectl --context containerd-20201109134931-342799 describe pod : exit status 1
--- FAIL: TestStartStop/group/containerd/serial/VerifyKubernetesImages (9.45s)
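The post-mortem above ends with `kubectl describe pod` being invoked with no pod names (because the `status.phase!=Running` field selector matched nothing), so kubectl exits 1 with "resource name may not be empty". The following is a minimal, hypothetical Go sketch, not the minikube helpers themselves, of guarding that step by skipping the describe call when the list is empty; the context name is copied from the log and kubectl is assumed to be on PATH.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// describeNonRunningPods lists pods that are not Running and only runs
// "kubectl describe pod" when at least one name came back, avoiding the
// "resource name may not be empty" failure seen in the post-mortem.
func describeNonRunningPods(kubeContext string) error {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		return fmt.Errorf("listing non-running pods: %w", err)
	}

	names := strings.Fields(string(out))
	if len(names) == 0 {
		fmt.Println("no non-running pods found, skipping describe")
		return nil
	}

	args := append([]string{"--context", kubeContext, "describe", "pod"}, names...)
	desc, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Print(string(desc))
	return err
}

func main() {
	if err := describeNonRunningPods("containerd-20201109134931-342799"); err != nil {
		fmt.Println("post-mortem describe failed:", err)
	}
}
```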

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
fn_tunnel_cmd_test.go:145: (dbg) Run:  kubectl --context functional-20201109132758-342799 apply -f testdata/testsvc.yaml
fn_tunnel_cmd_test.go:145: (dbg) Non-zero exit: kubectl --context functional-20201109132758-342799 apply -f testdata/testsvc.yaml: exit status 1 (85.615512ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.16:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
fn_tunnel_cmd_test.go:147: kubectl --context functional-20201109132758-342799 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.09s)
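This failure is a plain connection-refused from `kubectl apply` to the apiserver at 192.168.49.16:8441. A minimal, hypothetical sketch (not the test's own code) of waiting for that endpoint to accept TCP connections before applying the manifest is shown below; the address and timeout are taken from the log and chosen as placeholders respectively.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer polls a TCP connect to addr until it succeeds or the
// deadline passes, so a later "kubectl apply" does not hit connection refused.
func waitForAPIServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver %s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForAPIServer("192.168.49.16:8441", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver reachable; safe to run kubectl apply -f testdata/testsvc.yaml")
}
```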

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (128.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/AccessDirect
fn_tunnel_cmd_test.go:217: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
fn_tunnel_cmd_test.go:219: (dbg) Run:  kubectl --context functional-20201109132758-342799 get svc nginx-svc
fn_tunnel_cmd_test.go:219: (dbg) Non-zero exit: kubectl --context functional-20201109132758-342799 get svc nginx-svc: exit status 1 (79.16169ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): services "nginx-svc" not found

                                                
                                                
** /stderr **
fn_tunnel_cmd_test.go:221: kubectl --context functional-20201109132758-342799 get svc nginx-svc failed: exit status 1
fn_tunnel_cmd_test.go:223: failed to kubectl get svc nginx-svc:
fn_tunnel_cmd_test.go:230: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (128.91s)
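Here the service never received an IP, so the test ended up requesting the literal URL "http://" and Go reported "http: no Host in request URL". A minimal sketch, assuming nothing beyond the standard library, of validating the URL's host before issuing the GET so the error names the real problem:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// hitService refuses to issue the request when the URL has no host,
// which is what happens when the tunnel/service IP is still empty.
func hitService(rawURL string) error {
	u, err := url.Parse(rawURL)
	if err != nil {
		return err
	}
	if u.Host == "" {
		return fmt.Errorf("service has no IP yet, refusing to GET %q", rawURL)
	}
	resp, err := http.Get(rawURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
	return nil
}

func main() {
	// With an empty host, as in the failing test, this reports a clear error.
	if err := hitService("http://"); err != nil {
		fmt.Println(err)
	}
}
```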

                                                
                                    

Test pass (188/209)

passed test	duration (s)
TestDownloadOnly/crio/v1.13.0 5.77
TestDownloadOnly/crio/v1.19.2 6.62
TestDownloadOnly/crio/v1.19.2#01 0.65
TestDownloadOnly/crio/DeleteAll 0.54
TestDownloadOnly/crio/DeleteAlwaysSucceeds 0.32
TestDownloadOnly/docker/v1.13.0 4.24
TestDownloadOnly/docker/v1.19.2 4.35
TestDownloadOnly/docker/v1.19.2#01 0.65
TestDownloadOnly/docker/DeleteAll 0.51
TestDownloadOnly/docker/DeleteAlwaysSucceeds 0.31
TestDownloadOnly/containerd/v1.13.0 6.93
TestDownloadOnly/containerd/v1.19.2 26.98
TestDownloadOnly/containerd/v1.19.2#01 0.62
TestDownloadOnly/containerd/DeleteAll 0.49
TestDownloadOnly/containerd/DeleteAlwaysSucceeds 0.3
TestDownloadOnlyKic 2.46
TestOffline/group/docker 102.51
TestOffline/group/crio 133.66
TestOffline/group/containerd 106.1
TestAddons/parallel/Ingress 17.16
TestAddons/parallel/MetricsServer 36.51
TestAddons/parallel/HelmTiller 12.3
TestAddons/parallel/CSI 125.33
TestAddons/parallel/GCPAuth 47.73
TestCertOptions 126.33
TestDockerFlags 90.08
TestForceSystemdFlag 96.87
TestForceSystemdEnv 106.85
TestErrorSpam 89.11
TestFunctional/serial/CopySyncFile 0
TestFunctional/serial/StartWithProxy 61.21
TestFunctional/serial/SoftStart 47.78
TestFunctional/serial/KubeContext 0.06
TestFunctional/serial/KubectlGetPods 0.47
TestFunctional/serial/CacheCmd/cache/add_remote 4.22
TestFunctional/serial/CacheCmd/cache/add_local 1.08
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
TestFunctional/serial/CacheCmd/cache/list 0.06
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
TestFunctional/serial/CacheCmd/cache/cache_reload 2.28
TestFunctional/serial/CacheCmd/cache/delete 0.12
TestFunctional/serial/MinikubeKubectlCmd 0.39
TestFunctional/serial/MinikubeKubectlCmdDirectly 0.4
TestJSONOutput/start/parallel/DistinctCurrentSteps 0
TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
TestJSONOutputError 0.56
TestMultiNode/serial/FreshStart2Nodes 76.69
TestMultiNode/serial/AddNode 27.94
TestMultiNode/serial/StopNode 2.89
TestMultiNode/serial/StartAfterStop 59.46
TestMultiNode/serial/DeleteNode 6.34
TestMultiNode/serial/StopMultiNode 7.85
TestMultiNode/serial/RestartMultiNode 95.21
TestPreload 109.44
TestInsufficientStorage 12.9
TestRunningBinaryUpgrade 203.18
TestStoppedBinaryUpgrade 119.31
TestKubernetesUpgrade 324.51
TestMissingContainerUpgrade 425.03
TestPause/serial/Start 175.03
TestFunctional/parallel/ComponentHealth 0.37
TestFunctional/parallel/ConfigCmd 0.43
TestFunctional/parallel/DashboardCmd 3.93
TestFunctional/parallel/DryRun 0.79
TestFunctional/parallel/StatusCmd 1.34
TestFunctional/parallel/LogsCmd 4.28
TestFunctional/parallel/MountCmd 5.49
TestFunctional/parallel/ServiceCmd 15.09
TestFunctional/parallel/AddonsCmd 0.22
TestFunctional/parallel/PersistentVolumeClaim 157.23
TestFunctional/parallel/SSHCmd 0.75
TestFunctional/parallel/FileSync 0.46
TestFunctional/parallel/CertSync 1.26
TestFunctional/parallel/NodeLabels 0.09
TestStartStop/group/old-k8s-version/serial/FirstStart 130.04
TestPause/serial/Pause 0.74
TestPause/serial/VerifyStatus 0.55
TestPause/serial/Unpause 0.76
TestPause/serial/PauseAgain 1.05
TestPause/serial/DeletePaused 4.62
TestStartStop/group/crio/serial/FirstStart 180.09
TestPause/serial/VerifyDeletedResources 9.14
TestStartStop/group/embed-certs/serial/FirstStart 71.54
TestStartStop/group/embed-certs/serial/DeployApp 13.54
TestStartStop/group/embed-certs/serial/Stop 11.33
TestStartStop/group/old-k8s-version/serial/DeployApp 8.81
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
TestStartStop/group/embed-certs/serial/SecondStart 51.54
TestStartStop/group/old-k8s-version/serial/Stop 11.52
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.29
TestStartStop/group/old-k8s-version/serial/SecondStart 56.33
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 8.02
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.01
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 8.02
TestStartStop/group/crio/serial/DeployApp 10.9
TestStartStop/group/embed-certs/serial/Pause 3.54
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.01
TestStartStop/group/containerd/serial/FirstStart 107.77
TestStartStop/group/crio/serial/Stop 21.29
TestStartStop/group/old-k8s-version/serial/Pause 3.71
TestStartStop/group/newest-cni/serial/FirstStart 91.72
TestStartStop/group/crio/serial/EnableAddonAfterStop 0.28
TestStartStop/group/crio/serial/SecondStart 56.72
TestStartStop/group/crio/serial/UserAppExistsAfterStop 15.61
TestNetworkPlugins/group/auto/Start 100.93
TestStartStop/group/crio/serial/AddonExistsAfterStop 5.22
TestStartStop/group/containerd/serial/DeployApp 10.09
TestStartStop/group/crio/serial/Pause 4.5
TestStartStop/group/newest-cni/serial/DeployApp 0
TestStartStop/group/newest-cni/serial/Stop 11.43
TestStartStop/group/containerd/serial/Stop 25.15
TestNetworkPlugins/group/false/Start 79.29
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.43
TestStartStop/group/newest-cni/serial/SecondStart 58.49
TestStartStop/group/containerd/serial/EnableAddonAfterStop 0.33
TestStartStop/group/containerd/serial/SecondStart 42.3
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestStartStop/group/containerd/serial/UserAppExistsAfterStop 16.03
TestStartStop/group/newest-cni/serial/Pause 3.42
TestNetworkPlugins/group/auto/KubeletFlags 0.37
TestNetworkPlugins/group/auto/NetCatPod 9.92
TestNetworkPlugins/group/false/KubeletFlags 0.38
TestNetworkPlugins/group/false/NetCatPod 13.42
TestNetworkPlugins/group/cilium/Start 167.82
TestStartStop/group/containerd/serial/AddonExistsAfterStop 5.01
TestNetworkPlugins/group/auto/DNS 34.73
TestNetworkPlugins/group/false/DNS 386.08
TestStartStop/group/containerd/serial/Pause 3.72
TestNetworkPlugins/group/calico/Start 156.43
TestNetworkPlugins/group/auto/Localhost 0.33
TestNetworkPlugins/group/auto/HairPin 7.01
TestNetworkPlugins/group/custom-weave/Start 111.83
TestNetworkPlugins/group/cilium/ControllerPod 5.03
TestNetworkPlugins/group/custom-weave/KubeletFlags 0.4
TestNetworkPlugins/group/custom-weave/NetCatPod 14.37
TestNetworkPlugins/group/cilium/KubeletFlags 0.43
TestNetworkPlugins/group/cilium/NetCatPod 14.57
TestNetworkPlugins/group/calico/ControllerPod 5.03
TestNetworkPlugins/group/enable-default-cni/Start 71.03
TestNetworkPlugins/group/calico/KubeletFlags 0.41
TestNetworkPlugins/group/calico/NetCatPod 14.44
TestNetworkPlugins/group/cilium/DNS 134.43
TestNetworkPlugins/group/calico/DNS 0.65
TestNetworkPlugins/group/calico/Localhost 0.34
TestNetworkPlugins/group/calico/HairPin 0.4
TestNetworkPlugins/group/kindnet/Start 89.4
TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
TestNetworkPlugins/group/enable-default-cni/NetCatPod 16.37
TestNetworkPlugins/group/enable-default-cni/DNS 0.28
TestNetworkPlugins/group/enable-default-cni/Localhost 0.29
TestNetworkPlugins/group/enable-default-cni/HairPin 0.29
TestNetworkPlugins/group/bridge/Start 63.77
TestNetworkPlugins/group/kindnet/ControllerPod 5.03
TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
TestNetworkPlugins/group/kindnet/NetCatPod 15.5
TestNetworkPlugins/group/kindnet/DNS 3.96
TestNetworkPlugins/group/kindnet/Localhost 0.29
TestNetworkPlugins/group/kindnet/HairPin 0.32
TestNetworkPlugins/group/cilium/Localhost 0.3
TestNetworkPlugins/group/cilium/HairPin 0.24
TestNetworkPlugins/group/kubenet/Start 65.16
TestNetworkPlugins/group/bridge/KubeletFlags 0.38
TestNetworkPlugins/group/bridge/NetCatPod 12.3
TestNetworkPlugins/group/bridge/DNS 7.29
TestNetworkPlugins/group/bridge/Localhost 0.29
TestNetworkPlugins/group/bridge/HairPin 0.29
TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
TestNetworkPlugins/group/kubenet/KubeletFlags 0.36
TestNetworkPlugins/group/kubenet/NetCatPod 13.36
TestNetworkPlugins/group/false/Localhost 0.26
TestNetworkPlugins/group/false/HairPin 5.26
TestNetworkPlugins/group/kubenet/DNS 0.33
TestNetworkPlugins/group/kubenet/Localhost 0.26
TestNetworkPlugins/group/kubenet/HairPin 0.25
TestFunctional/parallel/ProfileCmd/profile_not_create 0.58
TestFunctional/parallel/ProfileCmd/profile_list 0.41
TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
x
+
TestDownloadOnly/crio/v1.13.0 (5.77s)

                                                
                                                
=== RUN   TestDownloadOnly/crio/v1.13.0
aaa_download_only_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p crio-20201109131944-342799 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=crio --driver=docker 
aaa_download_only_test.go:65: (dbg) Done: out/minikube-linux-amd64 start --download-only -p crio-20201109131944-342799 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=crio --driver=docker : (5.770019298s)
--- PASS: TestDownloadOnly/crio/v1.13.0 (5.77s)

                                                
                                    
x
+
TestDownloadOnly/crio/v1.19.2 (6.62s)

                                                
                                                
=== RUN   TestDownloadOnly/crio/v1.19.2
aaa_download_only_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p crio-20201109131944-342799 --force --alsologtostderr --kubernetes-version=v1.19.2 --container-runtime=crio --driver=docker 
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p crio-20201109131944-342799 --force --alsologtostderr --kubernetes-version=v1.19.2 --container-runtime=crio --driver=docker : (6.620008918s)
--- PASS: TestDownloadOnly/crio/v1.19.2 (6.62s)

                                                
                                    
x
+
TestDownloadOnly/crio/v1.19.2#01 (0.65s)

                                                
                                                
=== RUN   TestDownloadOnly/crio/v1.19.2#01
aaa_download_only_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p crio-20201109131944-342799 --force --alsologtostderr --kubernetes-version=v1.19.2 --container-runtime=crio --driver=docker 
--- PASS: TestDownloadOnly/crio/v1.19.2#01 (0.65s)

                                                
                                    
x
+
TestDownloadOnly/crio/DeleteAll (0.54s)

                                                
                                                
=== RUN   TestDownloadOnly/crio/DeleteAll
aaa_download_only_test.go:133: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/crio/DeleteAll (0.54s)

                                                
                                    
x
+
TestDownloadOnly/crio/DeleteAlwaysSucceeds (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/crio/DeleteAlwaysSucceeds
aaa_download_only_test.go:145: (dbg) Run:  out/minikube-linux-amd64 delete -p crio-20201109131944-342799
--- PASS: TestDownloadOnly/crio/DeleteAlwaysSucceeds (0.32s)

                                                
                                    
x
+
TestDownloadOnly/docker/v1.13.0 (4.24s)

                                                
                                                
=== RUN   TestDownloadOnly/docker/v1.13.0
aaa_download_only_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p docker-20201109131958-342799 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:65: (dbg) Done: out/minikube-linux-amd64 start --download-only -p docker-20201109131958-342799 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=docker --driver=docker : (4.235341651s)
--- PASS: TestDownloadOnly/docker/v1.13.0 (4.24s)

                                                
                                    
x
+
TestDownloadOnly/docker/v1.19.2 (4.35s)

                                                
                                                
=== RUN   TestDownloadOnly/docker/v1.19.2
aaa_download_only_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p docker-20201109131958-342799 --force --alsologtostderr --kubernetes-version=v1.19.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p docker-20201109131958-342799 --force --alsologtostderr --kubernetes-version=v1.19.2 --container-runtime=docker --driver=docker : (4.352511855s)
--- PASS: TestDownloadOnly/docker/v1.19.2 (4.35s)

                                                
                                    
x
+
TestDownloadOnly/docker/v1.19.2#01 (0.65s)

                                                
                                                
=== RUN   TestDownloadOnly/docker/v1.19.2#01
aaa_download_only_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p docker-20201109131958-342799 --force --alsologtostderr --kubernetes-version=v1.19.2 --container-runtime=docker --driver=docker 
--- PASS: TestDownloadOnly/docker/v1.19.2#01 (0.65s)

                                                
                                    
x
+
TestDownloadOnly/docker/DeleteAll (0.51s)

                                                
                                                
=== RUN   TestDownloadOnly/docker/DeleteAll
aaa_download_only_test.go:133: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/docker/DeleteAll (0.51s)

                                                
                                    
x
+
TestDownloadOnly/docker/DeleteAlwaysSucceeds (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/docker/DeleteAlwaysSucceeds
aaa_download_only_test.go:145: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-20201109131958-342799
--- PASS: TestDownloadOnly/docker/DeleteAlwaysSucceeds (0.31s)

                                                
                                    
x
+
TestDownloadOnly/containerd/v1.13.0 (6.93s)

                                                
                                                
=== RUN   TestDownloadOnly/containerd/v1.13.0
aaa_download_only_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p containerd-20201109132009-342799 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=containerd --driver=docker 
aaa_download_only_test.go:65: (dbg) Done: out/minikube-linux-amd64 start --download-only -p containerd-20201109132009-342799 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=containerd --driver=docker : (6.927878167s)
--- PASS: TestDownloadOnly/containerd/v1.13.0 (6.93s)

                                                
                                    
x
+
TestDownloadOnly/containerd/v1.19.2 (26.98s)

                                                
                                                
=== RUN   TestDownloadOnly/containerd/v1.19.2
aaa_download_only_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p containerd-20201109132009-342799 --force --alsologtostderr --kubernetes-version=v1.19.2 --container-runtime=containerd --driver=docker 
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p containerd-20201109132009-342799 --force --alsologtostderr --kubernetes-version=v1.19.2 --container-runtime=containerd --driver=docker : (26.98014856s)
--- PASS: TestDownloadOnly/containerd/v1.19.2 (26.98s)

                                                
                                    
x
+
TestDownloadOnly/containerd/v1.19.2#01 (0.62s)

                                                
                                                
=== RUN   TestDownloadOnly/containerd/v1.19.2#01
aaa_download_only_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p containerd-20201109132009-342799 --force --alsologtostderr --kubernetes-version=v1.19.2 --container-runtime=containerd --driver=docker 
--- PASS: TestDownloadOnly/containerd/v1.19.2#01 (0.62s)

                                                
                                    
x
+
TestDownloadOnly/containerd/DeleteAll (0.49s)

                                                
                                                
=== RUN   TestDownloadOnly/containerd/DeleteAll
aaa_download_only_test.go:133: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/containerd/DeleteAll (0.49s)

                                                
                                    
x
+
TestDownloadOnly/containerd/DeleteAlwaysSucceeds (0.30s)

                                                
                                                
=== RUN   TestDownloadOnly/containerd/DeleteAlwaysSucceeds
aaa_download_only_test.go:145: (dbg) Run:  out/minikube-linux-amd64 delete -p containerd-20201109132009-342799
--- PASS: TestDownloadOnly/containerd/DeleteAlwaysSucceeds (0.30s)

                                                
                                    
x
+
TestDownloadOnlyKic (2.46s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20201109132044-342799 --force --alsologtostderr --driver=docker 
helpers_test.go:171: Cleaning up "download-docker-20201109132044-342799" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20201109132044-342799
--- PASS: TestDownloadOnlyKic (2.46s)

                                                
                                    
x
+
TestOffline/group/docker (102.51s)

                                                
                                                
=== RUN   TestOffline/group/docker
=== PAUSE TestOffline/group/docker

                                                
                                                

                                                
                                                
=== CONT  TestOffline/group/docker

                                                
                                                
=== CONT  TestOffline/group/docker
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-20201109132047-342799 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime docker --driver=docker 

                                                
                                                
=== CONT  TestOffline/group/docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-20201109132047-342799 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime docker --driver=docker : (1m39.433600308s)
helpers_test.go:171: Cleaning up "offline-docker-20201109132047-342799" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-20201109132047-342799
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-20201109132047-342799: (3.072869626s)
--- PASS: TestOffline/group/docker (102.51s)

                                                
                                    
x
+
TestOffline/group/crio (133.66s)

                                                
                                                
=== RUN   TestOffline/group/crio
=== PAUSE TestOffline/group/crio

                                                
                                                

                                                
                                                
=== CONT  TestOffline/group/crio

                                                
                                                
=== CONT  TestOffline/group/crio
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-20201109132047-342799 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime crio --driver=docker 

                                                
                                                
=== CONT  TestOffline/group/crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-20201109132047-342799 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime crio --driver=docker : (2m10.688066957s)
helpers_test.go:171: Cleaning up "offline-crio-20201109132047-342799" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-20201109132047-342799
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-20201109132047-342799: (2.971561046s)
--- PASS: TestOffline/group/crio (133.66s)

                                                
                                    
x
+
TestOffline/group/containerd (106.10s)

                                                
                                                
=== RUN   TestOffline/group/containerd
=== PAUSE TestOffline/group/containerd

                                                
                                                

                                                
                                                
=== CONT  TestOffline/group/containerd

                                                
                                                
=== CONT  TestOffline/group/containerd
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20201109132047-342799 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime containerd --driver=docker 

                                                
                                                
=== CONT  TestOffline/group/containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20201109132047-342799 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime containerd --driver=docker : (1m42.935881774s)
helpers_test.go:171: Cleaning up "offline-containerd-20201109132047-342799" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20201109132047-342799
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20201109132047-342799: (3.167375082s)
--- PASS: TestOffline/group/containerd (106.10s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (17.16s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:126: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "kube-system" ...
helpers_test.go:333: "ingress-nginx-admission-create-czqhj" [641cc0ad-3b19-4792-b6f0-720ce988118b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:126: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 79.242513ms
addons_test.go:131: (dbg) Run:  kubectl --context addons-20201109132301-342799 replace --force -f testdata/nginx-ing.yaml
addons_test.go:136: kubectl --context addons-20201109132301-342799 replace --force -f testdata/nginx-ing.yaml: unexpected stderr: Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
(may be temporary)
addons_test.go:145: (dbg) Run:  kubectl --context addons-20201109132301-342799 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:150: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:333: "nginx" [4ce33fd3-0995-43c3-ad14-201428ed7f5a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:333: "nginx" [4ce33fd3-0995-43c3-ad14-201428ed7f5a] Running

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:150: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.006271902s
addons_test.go:160: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201109132301-342799 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201109132301-342799 addons disable ingress --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:181: (dbg) Done: out/minikube-linux-amd64 -p addons-20201109132301-342799 addons disable ingress --alsologtostderr -v=1: (2.553257812s)
--- PASS: TestAddons/parallel/Ingress (17.16s)
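
For reference, a minimal Go sketch (not part of the test output above) of the check at addons_test.go:160: curl the ingress controller inside the minikube node with a Host header and look for an nginx response. The profile name and binary path come from this run; the expected response text is an assumption.

    // ingresscheck.go: sketch only, not the suite's implementation.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        profile := "addons-20201109132301-342799" // profile name from this run
        cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
            "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Fatalf("ssh/curl failed: %v\n%s", err, out)
        }
        // Expected body text is an assumption (nginx default welcome page).
        if !strings.Contains(string(out), "Welcome to nginx") {
            log.Fatalf("unexpected ingress response:\n%s", out)
        }
        fmt.Println("ingress answered for nginx.example.com")
    }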

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (36.51s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:275: metrics-server stabilized in 21.977643ms

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:277: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:333: "metrics-server-d9b576748-kl5sr" [7b266acd-fa8d-4cf4-88d0-e960bc2e494b] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:277: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.035313557s

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:283: (dbg) Run:  kubectl --context addons-20201109132301-342799 top pods -n kube-system

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:283: (dbg) Non-zero exit: kubectl --context addons-20201109132301-342799 top pods -n kube-system: exit status 1 (162.228443ms)

                                                
                                                
** stderr ** 
	W1109 13:25:43.587195  368731 top_pod.go:265] Metrics not available for pod kube-system/etcd-addons-20201109132301-342799, age: 2m8.587184489s
	error: Metrics not available for pod kube-system/etcd-addons-20201109132301-342799, age: 2m8.587184489s

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:283: (dbg) Run:  kubectl --context addons-20201109132301-342799 top pods -n kube-system
addons_test.go:283: (dbg) Non-zero exit: kubectl --context addons-20201109132301-342799 top pods -n kube-system: exit status 1 (99.199203ms)

                                                
                                                
** stderr ** 
	W1109 13:25:47.000736  368796 top_pod.go:265] Metrics not available for pod kube-system/coredns-f9fd979d6-6sj4j, age: 2m0.000723981s
	error: Metrics not available for pod kube-system/coredns-f9fd979d6-6sj4j, age: 2m0.000723981s

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:283: (dbg) Run:  kubectl --context addons-20201109132301-342799 top pods -n kube-system
addons_test.go:283: (dbg) Non-zero exit: kubectl --context addons-20201109132301-342799 top pods -n kube-system: exit status 1 (1.010440808s)

                                                
                                                
** stderr ** 
	W1109 13:25:54.494221  369322 top_pod.go:265] Metrics not available for pod kube-system/coredns-f9fd979d6-6sj4j, age: 2m7.494209233s
	error: Metrics not available for pod kube-system/coredns-f9fd979d6-6sj4j, age: 2m7.494209233s

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:283: (dbg) Run:  kubectl --context addons-20201109132301-342799 top pods -n kube-system
addons_test.go:283: (dbg) Non-zero exit: kubectl --context addons-20201109132301-342799 top pods -n kube-system: exit status 1 (825.685166ms)

                                                
                                                
** stderr ** 
	W1109 13:26:03.180472  369818 top_pod.go:265] Metrics not available for pod kube-system/coredns-f9fd979d6-6sj4j, age: 2m16.180453188s
	error: Metrics not available for pod kube-system/coredns-f9fd979d6-6sj4j, age: 2m16.180453188s

                                                
                                                
** /stderr **
addons_test.go:283: (dbg) Run:  kubectl --context addons-20201109132301-342799 top pods -n kube-system

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:301: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201109132301-342799 addons disable metrics-server --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:301: (dbg) Done: out/minikube-linux-amd64 -p addons-20201109132301-342799 addons disable metrics-server --alsologtostderr -v=1: (1.709100066s)
--- PASS: TestAddons/parallel/MetricsServer (36.51s)
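
The repeated non-zero exits above are expected while metrics-server gathers its first samples. A minimal Go sketch of that retry loop (context name from this run; the timeout and backoff are assumptions):

    // topretry.go: sketch only. Re-run `kubectl top pods -n kube-system`
    // until metrics-server returns data or a deadline passes.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        ctx := "addons-20201109132301-342799"
        deadline := time.Now().Add(2 * time.Minute)
        for attempt := 1; ; attempt++ {
            out, err := exec.Command("kubectl", "--context", ctx,
                "top", "pods", "-n", "kube-system").CombinedOutput()
            if err == nil {
                fmt.Printf("metrics available after %d attempt(s):\n%s", attempt, out)
                return
            }
            if time.Now().After(deadline) {
                log.Fatalf("gave up after %d attempts: %v\n%s", attempt, err, out)
            }
            time.Sleep(5 * time.Second) // metrics-server needs time to scrape first samples
        }
    }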

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (12.30s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:319: tiller-deploy stabilized in 68.089568ms
addons_test.go:321: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:333: "tiller-deploy-565984b594-frbdj" [3ac91d9d-f85f-424f-99b8-0344748f5954] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:321: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.220357513s
addons_test.go:336: (dbg) Run:  kubectl --context addons-20201109132301-342799 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:336: (dbg) Done: kubectl --context addons-20201109132301-342799 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (5.690120595s)
addons_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201109132301-342799 addons disable helm-tiller --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:353: (dbg) Done: out/minikube-linux-amd64 -p addons-20201109132301-342799 addons disable helm-tiller --alsologtostderr -v=1: (1.312628099s)
--- PASS: TestAddons/parallel/HelmTiller (12.30s)

                                                
                                    
x
+
TestAddons/parallel/CSI (125.33s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:434: csi-hostpath-driver pods stabilized in 22.243232ms
addons_test.go:437: (dbg) Run:  kubectl --context addons-20201109132301-342799 create -f testdata/csi-hostpath-driver/pvc.yaml

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:442: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:383: (dbg) Run:  kubectl --context addons-20201109132301-342799 get pvc hpvc -o jsonpath={.status.phase} -n default

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:447: (dbg) Run:  kubectl --context addons-20201109132301-342799 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:452: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:333: "task-pv-pod" [6d2ec59c-ec9e-4cf8-aca5-bfc78768c759] Pending

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:333: "task-pv-pod" [6d2ec59c-ec9e-4cf8-aca5-bfc78768c759] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:333: "task-pv-pod" [6d2ec59c-ec9e-4cf8-aca5-bfc78768c759] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:452: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 43.010050035s
addons_test.go:457: (dbg) Run:  kubectl --context addons-20201109132301-342799 create -f testdata/csi-hostpath-driver/snapshotclass.yaml

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:463: (dbg) Run:  kubectl --context addons-20201109132301-342799 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:468: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:408: (dbg) Run:  kubectl --context addons-20201109132301-342799 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:416: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:408: (dbg) Run:  kubectl --context addons-20201109132301-342799 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:473: (dbg) Run:  kubectl --context addons-20201109132301-342799 delete pod task-pv-pod
addons_test.go:473: (dbg) Done: kubectl --context addons-20201109132301-342799 delete pod task-pv-pod: (2.244170066s)
addons_test.go:479: (dbg) Run:  kubectl --context addons-20201109132301-342799 delete pvc hpvc
addons_test.go:485: (dbg) Run:  kubectl --context addons-20201109132301-342799 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:490: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:383: (dbg) Run:  kubectl --context addons-20201109132301-342799 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:383: (dbg) Run:  kubectl --context addons-20201109132301-342799 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:495: (dbg) Run:  kubectl --context addons-20201109132301-342799 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:500: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:333: "task-pv-pod-restore" [9bc8ca94-3d1f-47d2-a894-6d3d2ebe7e53] Pending
helpers_test.go:333: "task-pv-pod-restore" [9bc8ca94-3d1f-47d2-a894-6d3d2ebe7e53] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:333: "task-pv-pod-restore" [9bc8ca94-3d1f-47d2-a894-6d3d2ebe7e53] Running
addons_test.go:500: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 16.007836349s
addons_test.go:505: (dbg) Run:  kubectl --context addons-20201109132301-342799 delete pod task-pv-pod-restore
addons_test.go:505: (dbg) Done: kubectl --context addons-20201109132301-342799 delete pod task-pv-pod-restore: (1.782195888s)
addons_test.go:509: (dbg) Run:  kubectl --context addons-20201109132301-342799 delete pvc hpvc-restore
addons_test.go:513: (dbg) Run:  kubectl --context addons-20201109132301-342799 delete volumesnapshot new-snapshot-demo
addons_test.go:517: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201109132301-342799 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:517: (dbg) Done: out/minikube-linux-amd64 -p addons-20201109132301-342799 addons disable csi-hostpath-driver --alsologtostderr -v=1: (5.895887315s)
addons_test.go:521: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201109132301-342799 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:521: (dbg) Done: out/minikube-linux-amd64 -p addons-20201109132301-342799 addons disable volumesnapshots --alsologtostderr -v=1: (1.020566141s)
--- PASS: TestAddons/parallel/CSI (125.33s)
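
A minimal Go sketch of the helpers_test.go:408 polling seen above: query the VolumeSnapshot's .status.readyToUse via JSONPath until it reports "true" or a timeout expires. Object and context names are from this run; the timeout and poll interval are assumptions.

    // snapshotwait.go: sketch only.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        ctx, snap, ns := "addons-20201109132301-342799", "new-snapshot-demo", "default"
        deadline := time.Now().Add(6 * time.Minute)
        for {
            out, err := exec.Command("kubectl", "--context", ctx, "get", "volumesnapshot", snap,
                "-o", "jsonpath={.status.readyToUse}", "-n", ns).CombinedOutput()
            if err == nil && strings.TrimSpace(string(out)) == "true" {
                fmt.Println("snapshot is ready to use")
                return
            }
            if time.Now().After(deadline) {
                log.Fatalf("snapshot never became ready: %v (%q)", err, out)
            }
            time.Sleep(2 * time.Second)
        }
    }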

                                                
                                    
x
+
TestAddons/parallel/GCPAuth (47.73s)

                                                
                                                
=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:531: (dbg) Run:  kubectl --context addons-20201109132301-342799 create -f testdata/busybox.yaml

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:537: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:333: "busybox" [c4ded12d-46d8-4ef6-a2c9-2dc477c1c63e] Pending

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:333: "busybox" [c4ded12d-46d8-4ef6-a2c9-2dc477c1c63e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:333: "busybox" [c4ded12d-46d8-4ef6-a2c9-2dc477c1c63e] Running

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:537: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 11.063851099s
addons_test.go:543: (dbg) Run:  kubectl --context addons-20201109132301-342799 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:555: (dbg) Run:  kubectl --context addons-20201109132301-342799 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:578: (dbg) Run:  kubectl --context addons-20201109132301-342799 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:589: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201109132301-342799 addons disable gcp-auth --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:589: (dbg) Done: out/minikube-linux-amd64 -p addons-20201109132301-342799 addons disable gcp-auth --alsologtostderr -v=1: (34.287766892s)
--- PASS: TestAddons/parallel/GCPAuth (47.73s)
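
A minimal Go sketch of the in-pod checks at addons_test.go:543/555 above: exec into the busybox pod and confirm that the gcp-auth webhook injected the credentials environment variable and mounted key file. Context and pod names come from this run; the `head` command is illustrative only.

    // gcpauthcheck.go: sketch only.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func podExec(ctx, pod, script string) string {
        out, err := exec.Command("kubectl", "--context", ctx, "exec", pod, "--",
            "/bin/sh", "-c", script).CombinedOutput()
        if err != nil {
            log.Fatalf("exec %q failed: %v\n%s", script, err, out)
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        ctx, pod := "addons-20201109132301-342799", "busybox"
        creds := podExec(ctx, pod, "printenv GOOGLE_APPLICATION_CREDENTIALS")
        if creds == "" {
            log.Fatal("GOOGLE_APPLICATION_CREDENTIALS is not set in the pod")
        }
        fmt.Println("credentials file injected at:", creds)
        fmt.Println("first bytes of the key file:", podExec(ctx, pod, "head -c 32 "+creds))
    }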

                                                
                                    
x
+
TestCertOptions (126.33s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:46: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20201109133858-342799 --memory=1900 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker 

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:46: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20201109133858-342799 --memory=1900 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker : (2m1.241497068s)
cert_options_test.go:57: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20201109133858-342799 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:72: (dbg) Run:  kubectl --context cert-options-20201109133858-342799 config view
helpers_test.go:171: Cleaning up "cert-options-20201109133858-342799" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20201109133858-342799
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20201109133858-342799: (4.297393212s)
--- PASS: TestCertOptions (126.33s)

                                                
                                    
x
+
TestDockerFlags (90.08s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-20201109134422-342799 --cache-images=false --memory=1800 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-20201109134422-342799 --cache-images=false --memory=1800 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (1m25.996961003s)
docker_test.go:46: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20201109134422-342799 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:57: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20201109134422-342799 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:171: Cleaning up "docker-flags-20201109134422-342799" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-20201109134422-342799
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-20201109134422-342799: (3.282682776s)
--- PASS: TestDockerFlags (90.08s)
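
A minimal Go sketch of the verification at docker_test.go:46/57 above: read dockerd's systemd Environment property over `minikube ssh`, confirm the --docker-env values, then print ExecStart so the --docker-opt flags can be inspected. Profile name and expected values come from this run.

    // dockerflags.go: sketch only.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func show(profile, property string) string {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
            "sudo systemctl show docker --property="+property+" --no-pager").CombinedOutput()
        if err != nil {
            log.Fatalf("systemctl show %s failed: %v\n%s", property, err, out)
        }
        return string(out)
    }

    func main() {
        profile := "docker-flags-20201109134422-342799"
        env := show(profile, "Environment")
        for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
            if !strings.Contains(env, want) {
                log.Fatalf("expected %q in docker Environment, got:\n%s", want, env)
            }
        }
        // Print ExecStart so the --docker-opt values (debug, icc=true) can be checked by eye.
        fmt.Print(show(profile, "ExecStart"))
    }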

                                                
                                    
x
+
TestForceSystemdFlag (96.87s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20201109134221-342799 --memory=1800 --force-systemd --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20201109134221-342799 --memory=1800 --force-systemd --alsologtostderr -v=5 --driver=docker : (1m31.286019961s)
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20201109134221-342799 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 -p force-systemd-flag-20201109134221-342799 ssh "docker info --format {{.CgroupDriver}}": (1.289240002s)
helpers_test.go:171: Cleaning up "force-systemd-flag-20201109134221-342799" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20201109134221-342799
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20201109134221-342799: (4.293602057s)
--- PASS: TestForceSystemdFlag (96.87s)
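
A minimal Go sketch of the check at docker_test.go:85 above: ask Docker inside the node which cgroup driver it uses and expect "systemd" when --force-systemd was passed. Profile name comes from this run.

    // cgroupdriver.go: sketch only.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        profile := "force-systemd-flag-20201109134221-342799"
        out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
            "docker info --format {{.CgroupDriver}}").CombinedOutput()
        if err != nil {
            log.Fatalf("docker info failed: %v\n%s", err, out)
        }
        if got := strings.TrimSpace(string(out)); got != "systemd" {
            log.Fatalf("expected cgroup driver systemd, got %q", got)
        }
        fmt.Println("docker is using the systemd cgroup driver")
    }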

                                                
                                    
x
+
TestForceSystemdEnv (106.85s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20201109134235-342799 --memory=1800 --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20201109134235-342799 --memory=1800 --alsologtostderr -v=5 --driver=docker : (1m36.026896804s)
docker_test.go:113: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20201109134235-342799 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:171: Cleaning up "force-systemd-env-20201109134235-342799" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20201109134235-342799
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20201109134235-342799: (10.185953205s)
--- PASS: TestForceSystemdEnv (106.85s)

                                                
                                    
x
+
TestErrorSpam (89.11s)

                                                
                                                
=== RUN   TestErrorSpam
=== PAUSE TestErrorSpam

                                                
                                                

                                                
                                                
=== CONT  TestErrorSpam
error_spam_test.go:62: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20201109134106-342799 -n=1 --memory=2250 --wait=false --driver=docker 

                                                
                                                
=== CONT  TestErrorSpam
error_spam_test.go:62: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20201109134106-342799 -n=1 --memory=2250 --wait=false --driver=docker : (1m25.034221549s)
helpers_test.go:171: Cleaning up "nospam-20201109134106-342799" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p nospam-20201109134106-342799

                                                
                                                
=== CONT  TestErrorSpam
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p nospam-20201109134106-342799: (4.078862502s)
--- PASS: TestErrorSpam (89.11s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:977: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/files/etc/test/nested/copy/342799/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (61.21s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20201109132758-342799 --memory=4000 --apiserver-port=8441 --wait=true --driver=docker 
functional_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p functional-20201109132758-342799 --memory=4000 --apiserver-port=8441 --wait=true --driver=docker : (1m1.211296071s)
--- PASS: TestFunctional/serial/StartWithProxy (61.21s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (47.78s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20201109132758-342799 --alsologtostderr -v=8
functional_test.go:261: (dbg) Done: out/minikube-linux-amd64 start -p functional-20201109132758-342799 --alsologtostderr -v=8: (47.780067092s)
functional_test.go:265: soft start took 47.781230019s for "functional-20201109132758-342799" cluster.
--- PASS: TestFunctional/serial/SoftStart (47.78s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:282: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.47s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:295: (dbg) Run:  kubectl --context functional-20201109132758-342799 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.47s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:501: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 cache add k8s.gcr.io/pause:3.1
functional_test.go:501: (dbg) Done: out/minikube-linux-amd64 -p functional-20201109132758-342799 cache add k8s.gcr.io/pause:3.1: (1.339366428s)
functional_test.go:501: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 cache add k8s.gcr.io/pause:3.3
functional_test.go:501: (dbg) Done: out/minikube-linux-amd64 -p functional-20201109132758-342799 cache add k8s.gcr.io/pause:3.3: (1.373024829s)
functional_test.go:501: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 cache add k8s.gcr.io/pause:latest
functional_test.go:501: (dbg) Done: out/minikube-linux-amd64 -p functional-20201109132758-342799 cache add k8s.gcr.io/pause:latest: (1.503139702s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.22s)

TestFunctional/serial/CacheCmd/cache/add_local (1.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:526: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20201109132758-342799 /tmp/functional-20201109132758-342799529222008
functional_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 cache add minikube-local-cache-test:functional-20201109132758-342799
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:538: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:545: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:558: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.28s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:571: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:577: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:577: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20201109132758-342799 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (339.89865ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 cache reload
functional_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p functional-20201109132758-342799 cache reload: (1.206702977s)
functional_test.go:587: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.28s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:596: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:596: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.39s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 kubectl -- --context functional-20201109132758-342799 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.39s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.4s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:332: (dbg) Run:  out/kubectl --context functional-20201109132758-342799 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.40s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutputError (0.56s)
=== RUN   TestJSONOutputError
json_output_test.go:134: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20201109133113-342799 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:134: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20201109133113-342799 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (120.719592ms)

-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20201109133113-342799] minikube v1.14.2 on Debian 9.13","name":"Initial Minikube Setup","totalsteps":"12"},"datacontenttype":"application/json","id":"579b7cd9-81af-4c47-8c50-b34eb1b92e91","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/kubeconfig"},"datacontenttype":"application/json","id":"b92196eb-f606-4932-a4fc-05f5c1e83f9d","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"849d104e-7bda-4505-ac42-6d7c985100c5","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube"},"datacontenttype":"application/json","id":"2d880ac5-63bd-4bf0-8817-b998f668ef1b","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=9627"},"datacontenttype":"application/json","id":"09c448d0-0cf4-4df1-a3bf-f5f1b3c67219","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"d06ba3fa-a1b5-4ae0-936b-ffdd7912123d","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

-- /stdout --
helpers_test.go:171: Cleaning up "json-output-error-20201109133113-342799" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20201109133113-342799
--- PASS: TestJSONOutputError (0.56s)

TestMultiNode/serial/FreshStart2Nodes (76.69s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:68: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20201109133113-342799 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
multinode_test.go:68: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20201109133113-342799 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m15.942517167s)
multinode_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201109133113-342799 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (76.69s)

TestMultiNode/serial/AddNode (27.94s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20201109133113-342799 -v 3 --alsologtostderr
multinode_test.go:92: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20201109133113-342799 -v 3 --alsologtostderr: (26.885327276s)
multinode_test.go:98: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201109133113-342799 status --alsologtostderr
multinode_test.go:98: (dbg) Done: out/minikube-linux-amd64 -p multinode-20201109133113-342799 status --alsologtostderr: (1.051739423s)
--- PASS: TestMultiNode/serial/AddNode (27.94s)

TestMultiNode/serial/StopNode (2.89s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201109133113-342799 node stop m03
multinode_test.go:114: (dbg) Done: out/minikube-linux-amd64 -p multinode-20201109133113-342799 node stop m03: (1.420982585s)
multinode_test.go:120: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201109133113-342799 status
multinode_test.go:120: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20201109133113-342799 status: exit status 7 (732.481148ms)

-- stdout --
	multinode-20201109133113-342799
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20201109133113-342799-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20201109133113-342799-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201109133113-342799 status --alsologtostderr
multinode_test.go:127: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20201109133113-342799 status --alsologtostderr: exit status 7 (735.908491ms)

-- stdout --
	multinode-20201109133113-342799
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20201109133113-342799-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20201109133113-342799-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1109 13:33:00.568008  405031 out.go:185] Setting OutFile to fd 1 ...
	I1109 13:33:00.568361  405031 out.go:232] TERM=,COLORTERM=, which probably does not support color
	I1109 13:33:00.568377  405031 out.go:198] Setting ErrFile to fd 2...
	I1109 13:33:00.568382  405031 out.go:232] TERM=,COLORTERM=, which probably does not support color
	I1109 13:33:00.568500  405031 root.go:279] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/bin
	I1109 13:33:00.568735  405031 mustload.go:66] Loading cluster: multinode-20201109133113-342799
	I1109 13:33:00.569097  405031 status.go:222] checking status of multinode-20201109133113-342799 ...
	I1109 13:33:00.569641  405031 cli_runner.go:110] Run: docker container inspect multinode-20201109133113-342799 --format={{.State.Status}}
	I1109 13:33:00.620757  405031 status.go:294] multinode-20201109133113-342799 host status = "Running" (err=<nil>)
	I1109 13:33:00.620809  405031 host.go:66] Checking if "multinode-20201109133113-342799" exists ...
	I1109 13:33:00.621162  405031 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20201109133113-342799
	I1109 13:33:00.673397  405031 host.go:66] Checking if "multinode-20201109133113-342799" exists ...
	I1109 13:33:00.673840  405031 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:33:00.673888  405031 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20201109133113-342799
	I1109 13:33:00.726415  405031 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32987 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/machines/multinode-20201109133113-342799/id_rsa Username:docker}
	I1109 13:33:00.828631  405031 ssh_runner.go:148] Run: systemctl --version
	I1109 13:33:00.834380  405031 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service kubelet
	I1109 13:33:00.850120  405031 kubeconfig.go:93] found "multinode-20201109133113-342799" server: "https://192.168.59.16:8443"
	I1109 13:33:00.850163  405031 api_server.go:146] Checking apiserver status ...
	I1109 13:33:00.850250  405031 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:33:00.876130  405031 ssh_runner.go:148] Run: sudo egrep ^[0-9]+:freezer: /proc/2146/cgroup
	I1109 13:33:00.888087  405031 api_server.go:162] apiserver freezer: "7:freezer:/docker/2d88f32fc1cbc677aae59c40df7bef760f638574749417dde678179ede02a27e/kubepods/burstable/pod9dd5561cfff0f3c9ec8590732cfa485d/35ba7d47f235eb2a32906cabd6ef0fc48b8500da00c94b2c51ede3b012a33f70"
	I1109 13:33:00.888176  405031 ssh_runner.go:148] Run: sudo cat /sys/fs/cgroup/freezer/docker/2d88f32fc1cbc677aae59c40df7bef760f638574749417dde678179ede02a27e/kubepods/burstable/pod9dd5561cfff0f3c9ec8590732cfa485d/35ba7d47f235eb2a32906cabd6ef0fc48b8500da00c94b2c51ede3b012a33f70/freezer.state
	I1109 13:33:00.898308  405031 api_server.go:184] freezer state: "THAWED"
	I1109 13:33:00.898353  405031 api_server.go:221] Checking apiserver healthz at https://192.168.59.16:8443/healthz ...
	I1109 13:33:00.905559  405031 api_server.go:241] https://192.168.59.16:8443/healthz returned 200:
	ok
	I1109 13:33:00.905592  405031 status.go:369] multinode-20201109133113-342799 apiserver status = Running (err=<nil>)
	I1109 13:33:00.905604  405031 status.go:224] multinode-20201109133113-342799 status: &{Name:multinode-20201109133113-342799 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false}
	I1109 13:33:00.905641  405031 status.go:222] checking status of multinode-20201109133113-342799-m02 ...
	I1109 13:33:00.905941  405031 cli_runner.go:110] Run: docker container inspect multinode-20201109133113-342799-m02 --format={{.State.Status}}
	I1109 13:33:00.957712  405031 status.go:294] multinode-20201109133113-342799-m02 host status = "Running" (err=<nil>)
	I1109 13:33:00.957745  405031 host.go:66] Checking if "multinode-20201109133113-342799-m02" exists ...
	I1109 13:33:00.958117  405031 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20201109133113-342799-m02
	I1109 13:33:01.021709  405031 host.go:66] Checking if "multinode-20201109133113-342799-m02" exists ...
	I1109 13:33:01.022136  405031 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:33:01.022191  405031 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20201109133113-342799-m02
	I1109 13:33:01.077747  405031 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32991 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/machines/multinode-20201109133113-342799-m02/id_rsa Username:docker}
	I1109 13:33:01.172372  405031 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service kubelet
	I1109 13:33:01.185482  405031 status.go:224] multinode-20201109133113-342799-m02 status: &{Name:multinode-20201109133113-342799-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true}
	I1109 13:33:01.185532  405031 status.go:222] checking status of multinode-20201109133113-342799-m03 ...
	I1109 13:33:01.185834  405031 cli_runner.go:110] Run: docker container inspect multinode-20201109133113-342799-m03 --format={{.State.Status}}
	I1109 13:33:01.244152  405031 status.go:294] multinode-20201109133113-342799-m03 host status = "Stopped" (err=<nil>)
	I1109 13:33:01.244211  405031 status.go:307] host is not running, skipping remaining checks
	I1109 13:33:01.244221  405031 status.go:224] multinode-20201109133113-342799-m03 status: &{Name:multinode-20201109133113-342799-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.89s)

TestMultiNode/serial/StartAfterStop (59.46s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:147: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201109133113-342799 node start m03 --alsologtostderr
multinode_test.go:157: (dbg) Done: out/minikube-linux-amd64 -p multinode-20201109133113-342799 node start m03 --alsologtostderr: (57.776523622s)
multinode_test.go:164: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201109133113-342799 status
multinode_test.go:178: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (59.46s)

TestMultiNode/serial/DeleteNode (6.34s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:265: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201109133113-342799 node delete m03
multinode_test.go:265: (dbg) Done: out/minikube-linux-amd64 -p multinode-20201109133113-342799 node delete m03: (5.425609961s)
multinode_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201109133113-342799 status --alsologtostderr
multinode_test.go:285: (dbg) Run:  docker volume ls
multinode_test.go:295: (dbg) Run:  kubectl get nodes
multinode_test.go:303: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.34s)

TestMultiNode/serial/StopMultiNode (7.85s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:186: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201109133113-342799 stop
multinode_test.go:186: (dbg) Done: out/minikube-linux-amd64 -p multinode-20201109133113-342799 stop: (7.504315787s)
multinode_test.go:192: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201109133113-342799 status
multinode_test.go:192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20201109133113-342799 status: exit status 7 (168.959599ms)

-- stdout --
	multinode-20201109133113-342799
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20201109133113-342799-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201109133113-342799 status --alsologtostderr
multinode_test.go:199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20201109133113-342799 status --alsologtostderr: exit status 7 (175.591149ms)

-- stdout --
	multinode-20201109133113-342799
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20201109133113-342799-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1109 13:34:14.783918  410820 out.go:185] Setting OutFile to fd 1 ...
	I1109 13:34:14.784224  410820 out.go:232] TERM=,COLORTERM=, which probably does not support color
	I1109 13:34:14.784237  410820 out.go:198] Setting ErrFile to fd 2...
	I1109 13:34:14.784242  410820 out.go:232] TERM=,COLORTERM=, which probably does not support color
	I1109 13:34:14.784350  410820 root.go:279] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/bin
	I1109 13:34:14.784564  410820 mustload.go:66] Loading cluster: multinode-20201109133113-342799
	I1109 13:34:14.784916  410820 status.go:222] checking status of multinode-20201109133113-342799 ...
	I1109 13:34:14.785442  410820 cli_runner.go:110] Run: docker container inspect multinode-20201109133113-342799 --format={{.State.Status}}
	I1109 13:34:14.842076  410820 status.go:294] multinode-20201109133113-342799 host status = "Stopped" (err=<nil>)
	I1109 13:34:14.842129  410820 status.go:307] host is not running, skipping remaining checks
	I1109 13:34:14.842140  410820 status.go:224] multinode-20201109133113-342799 status: &{Name:multinode-20201109133113-342799 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false}
	I1109 13:34:14.842192  410820 status.go:222] checking status of multinode-20201109133113-342799-m02 ...
	I1109 13:34:14.842542  410820 cli_runner.go:110] Run: docker container inspect multinode-20201109133113-342799-m02 --format={{.State.Status}}
	I1109 13:34:14.895872  410820 status.go:294] multinode-20201109133113-342799-m02 host status = "Stopped" (err=<nil>)
	I1109 13:34:14.895909  410820 status.go:307] host is not running, skipping remaining checks
	I1109 13:34:14.895919  410820 status.go:224] multinode-20201109133113-342799-m02 status: &{Name:multinode-20201109133113-342799-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (7.85s)

TestMultiNode/serial/RestartMultiNode (95.21s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:215: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20201109133113-342799 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:225: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20201109133113-342799 --wait=true -v=8 --alsologtostderr --driver=docker : (1m34.312358758s)
multinode_test.go:231: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201109133113-342799 status --alsologtostderr
multinode_test.go:245: (dbg) Run:  kubectl get nodes
multinode_test.go:253: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (95.21s)

TestPreload (109.44s)
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20201109133555-342799 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20201109133555-342799 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: (1m8.591334501s)
preload_test.go:50: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20201109133555-342799 -- docker pull busybox
preload_test.go:50: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20201109133555-342799 -- docker pull busybox: (1.02383919s)
preload_test.go:60: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20201109133555-342799 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3
preload_test.go:60: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20201109133555-342799 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3: (36.390750538s)
preload_test.go:64: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20201109133555-342799 -- docker images
helpers_test.go:171: Cleaning up "test-preload-20201109133555-342799" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20201109133555-342799
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20201109133555-342799: (3.042005452s)
--- PASS: TestPreload (109.44s)

TestInsufficientStorage (12.9s)
=== RUN   TestInsufficientStorage
status_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20201109133845-342799 --output=json --wait=true --driver=docker 
status_test.go:49: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20201109133845-342799 --output=json --wait=true --driver=docker : exit status 26 (9.137880271s)

-- stdout --
	{"data":{"currentstep":"0","message":"[insufficient-storage-20201109133845-342799] minikube v1.14.2 on Debian 9.13","name":"Initial Minikube Setup","totalsteps":"12"},"datacontenttype":"application/json","id":"ab8bc3c8-ba97-40f4-a56e-bde31a70bd29","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/kubeconfig"},"datacontenttype":"application/json","id":"7c5aeeb3-c017-4062-bbf8-2b67d6e416af","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"7e90b013-685a-4644-b421-905b79fc5cb8","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube"},"datacontenttype":"application/json","id":"1a80fa24-b902-47c9-8379-c448e88de0bf","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=9627"},"datacontenttype":"application/json","id":"8efb1821-78a1-495d-a00a-1dc92287640d","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"},"datacontenttype":"application/json","id":"2328acb0-3df3-420c-9395-a7d3c5281935","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"12"},"datacontenttype":"application/json","id":"0fea4ec1-4752-4262-8512-83e1b32c1e7b","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20201109133845-342799 in cluster insufficient-storage-20201109133845-342799","name":"Starting Node","totalsteps":"12"},"datacontenttype":"application/json","id":"108d58df-b3ae-4ddd-99d1-cdcc8887265c","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"6","message":"Creating docker container (CPUs=2, Memory=7500MB) ...","name":"Creating Container","totalsteps":"12"},"datacontenttype":"application/json","id":"772cd947-8d1e-454f-8be6-29b9e303bc52","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"advice":"Try at least one of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused docker data\n\t\t\t2. Increase the amount of memory allocated to Docker for Desktop via \n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""},"datacontenttype":"application/json","id":"e0c4ec30-a1eb-4bdf-bef9-0bb9b91e161b","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

-- /stdout --
status_test.go:75: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20201109133845-342799 --output=json --layout=cluster
status_test.go:75: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20201109133845-342799 --output=json --layout=cluster: exit status 7 (407.782418ms)

-- stdout --
	{"Name":"insufficient-storage-20201109133845-342799","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=7500MB) ...","BinaryVersion":"v1.14.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":""}},"Nodes":[{"Name":"insufficient-storage-20201109133845-342799","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1109 13:38:54.950521  436604 status.go:363] kubeconfig endpoint: extract IP: "insufficient-storage-20201109133845-342799" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/kubeconfig

** /stderr **
status_test.go:75: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20201109133845-342799 --output=json --layout=cluster
status_test.go:75: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20201109133845-342799 --output=json --layout=cluster: exit status 7 (475.446007ms)

-- stdout --
	{"Name":"insufficient-storage-20201109133845-342799","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.14.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":""}},"Nodes":[{"Name":"insufficient-storage-20201109133845-342799","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1109 13:38:55.427296  436668 status.go:363] kubeconfig endpoint: extract IP: "insufficient-storage-20201109133845-342799" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/kubeconfig
	E1109 13:38:55.445677  436668 status.go:502] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/profiles/insufficient-storage-20201109133845-342799/events.json: no such file or directory

** /stderr **
helpers_test.go:171: Cleaning up "insufficient-storage-20201109133845-342799" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20201109133845-342799
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20201109133845-342799: (2.877285336s)
--- PASS: TestInsufficientStorage (12.90s)

TestRunningBinaryUpgrade (203.18s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:95: (dbg) Run:  /tmp/minikube-v1.9.0.048941418.exe start -p running-upgrade-20201109133858-342799 --memory=2200 --vm-driver=docker 

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:105: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20201109133858-342799 --memory=2200 --alsologtostderr -v=1 --driver=docker : (1m10.842085098s)
helpers_test.go:171: Cleaning up "running-upgrade-20201109133858-342799" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20201109133858-342799
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20201109133858-342799: (4.289919476s)
--- PASS: TestRunningBinaryUpgrade (203.18s)

TestStoppedBinaryUpgrade (119.31s)
=== RUN   TestStoppedBinaryUpgrade
=== PAUSE TestStoppedBinaryUpgrade

=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:142: (dbg) Run:  /tmp/minikube-v1.8.0.053836320.exe start -p stopped-upgrade-20201109134422-342799 --memory=2200 --vm-driver=docker 

=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:142: (dbg) Done: /tmp/minikube-v1.8.0.053836320.exe start -p stopped-upgrade-20201109134422-342799 --memory=2200 --vm-driver=docker : (1m20.066254318s)
version_upgrade_test.go:151: (dbg) Run:  /tmp/minikube-v1.8.0.053836320.exe -p stopped-upgrade-20201109134422-342799 stop
version_upgrade_test.go:151: (dbg) Done: /tmp/minikube-v1.8.0.053836320.exe -p stopped-upgrade-20201109134422-342799 stop: (2.26232977s)
version_upgrade_test.go:157: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20201109134422-342799 --memory=2200 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:157: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20201109134422-342799 --memory=2200 --alsologtostderr -v=1 --driver=docker : (32.115584691s)
helpers_test.go:171: Cleaning up "stopped-upgrade-20201109134422-342799" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p stopped-upgrade-20201109134422-342799

=== CONT  TestStoppedBinaryUpgrade
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p stopped-upgrade-20201109134422-342799: (4.295132165s)
--- PASS: TestStoppedBinaryUpgrade (119.31s)

TestKubernetesUpgrade (324.51s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:172: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20201109133858-342799 --memory=2200 --kubernetes-version=v1.13.0 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:177: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20201109133858-342799: (2.507234737s)
version_upgrade_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20201109133858-342799 status --format={{.Host}}
version_upgrade_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20201109133858-342799 status --format={{.Host}}: exit status 7 (131.531763ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:184: status error: exit status 7 (may be ok)
version_upgrade_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20201109133858-342799 --memory=2200 --kubernetes-version=v1.19.2 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20201109133858-342799 --memory=2200 --kubernetes-version=v1.19.2 --alsologtostderr -v=1 --driver=docker : (1m24.452520428s)
version_upgrade_test.go:198: (dbg) Run:  kubectl --context kubernetes-upgrade-20201109133858-342799 version --output=json
version_upgrade_test.go:217: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20201109133858-342799 --memory=2200 --kubernetes-version=v1.13.0 --driver=docker 
version_upgrade_test.go:219: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20201109133858-342799 --memory=2200 --kubernetes-version=v1.13.0 --driver=docker : exit status 106 (156.193886ms)

-- stdout --
	* [kubernetes-upgrade-20201109133858-342799] minikube v1.14.2 on Debian 9.13
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube
	  - MINIKUBE_LOCATION=9627
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.19.2 cluster to v1.13.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.13.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20201109133858-342799
	    minikube start -p kubernetes-upgrade-20201109133858-342799 --kubernetes-version=v1.13.0
	    
	    2) Create a second cluster with Kubernetes 1.13.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20201109133858-3427992 --kubernetes-version=v1.13.0
	    
	    3) Use the existing cluster at version Kubernetes 1.19.2, by running:
	    
	    minikube start -p kubernetes-upgrade-20201109133858-342799 --kubernetes-version=v1.19.2
	    

** /stderr **
version_upgrade_test.go:223: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20201109133858-342799 --memory=2200 --kubernetes-version=v1.19.2 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:225: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20201109133858-342799 --memory=2200 --kubernetes-version=v1.19.2 --alsologtostderr -v=1 --driver=docker : (1m39.281570855s)
helpers_test.go:171: Cleaning up "kubernetes-upgrade-20201109133858-342799" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20201109133858-342799

=== CONT  TestKubernetesUpgrade
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20201109133858-342799: (11.174574954s)
--- PASS: TestKubernetesUpgrade (324.51s)

TestMissingContainerUpgrade (425.03s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:259: (dbg) Run:  /tmp/minikube-v1.9.1.379609221.exe start -p missing-upgrade-20201109134358-342799 --memory=2200 --driver=docker 

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:259: (dbg) Done: /tmp/minikube-v1.9.1.379609221.exe start -p missing-upgrade-20201109134358-342799 --memory=2200 --driver=docker : (1m36.518278558s)
version_upgrade_test.go:268: (dbg) Run:  docker stop missing-upgrade-20201109134358-342799

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:268: (dbg) Done: docker stop missing-upgrade-20201109134358-342799: (10.574389573s)
version_upgrade_test.go:273: (dbg) Run:  docker rm missing-upgrade-20201109134358-342799
version_upgrade_test.go:279: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20201109134358-342799 --memory=2200 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:279: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20201109134358-342799 --memory=2200 --alsologtostderr -v=1 --driver=docker : (5m13.648257984s)
helpers_test.go:171: Cleaning up "missing-upgrade-20201109134358-342799" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20201109134358-342799

=== CONT  TestMissingContainerUpgrade
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20201109134358-342799: (3.801006902s)
--- PASS: TestMissingContainerUpgrade (425.03s)

TestPause/serial/Start (175.03s)
=== RUN   TestPause/serial/Start
pause_test.go:75: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20201109133858-342799 --memory=1800 --install-addons=false --wait=all --driver=docker 

=== CONT  TestPause/serial/Start
pause_test.go:75: (dbg) Done: out/minikube-linux-amd64 start -p pause-20201109133858-342799 --memory=1800 --install-addons=false --wait=all --driver=docker : (2m55.025815033s)
--- PASS: TestPause/serial/Start (175.03s)

TestFunctional/parallel/ComponentHealth (0.37s)
=== RUN   TestFunctional/parallel/ComponentHealth
=== PAUSE TestFunctional/parallel/ComponentHealth

=== CONT  TestFunctional/parallel/ComponentHealth
functional_test.go:351: (dbg) Run:  kubectl --context functional-20201109132758-342799 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:363: etcd phase: Running
functional_test.go:363: control-plane phase: Running
functional_test.go:363: control-plane phase: Running
functional_test.go:363: kube-apiserver phase: Running
functional_test.go:363: kube-controller-manager phase: Running
functional_test.go:363: control-plane phase: Running
functional_test.go:363: kube-scheduler phase: Running
functional_test.go:363: control-plane phase: Running
--- PASS: TestFunctional/parallel/ComponentHealth (0.37s)

TestFunctional/parallel/ConfigCmd (0.43s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:622: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 config unset cpus
functional_test.go:622: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 config get cpus
functional_test.go:622: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20201109132758-342799 config get cpus: exit status 14 (65.128162ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:622: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 config set cpus 2
functional_test.go:622: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 config get cpus
functional_test.go:622: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 config unset cpus
functional_test.go:622: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 config get cpus
functional_test.go:622: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20201109132758-342799 config get cpus: exit status 14 (64.887539ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (3.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:428: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url -p functional-20201109132758-342799 --alsologtostderr -v=1]

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:433: (dbg) stopping [out/minikube-linux-amd64 dashboard --url -p functional-20201109132758-342799 --alsologtostderr -v=1] ...
helpers_test.go:497: unable to kill pid 673311: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (3.93s)

                                                
                                    
TestFunctional/parallel/DryRun (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:473: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20201109132758-342799 --dry-run --memory 250MB --alsologtostderr --driver=docker 

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:473: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20201109132758-342799 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (350.096246ms)

                                                
                                                
-- stdout --
	* [functional-20201109132758-342799] minikube v1.14.2 on Debian 9.13
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube
	  - MINIKUBE_LOCATION=9627
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:59:53.234630  672753 out.go:185] Setting OutFile to fd 1 ...
	I1109 13:59:53.234996  672753 out.go:232] TERM=,COLORTERM=, which probably does not support color
	I1109 13:59:53.235014  672753 out.go:198] Setting ErrFile to fd 2...
	I1109 13:59:53.235020  672753 out.go:232] TERM=,COLORTERM=, which probably does not support color
	I1109 13:59:53.235331  672753 root.go:279] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube/bin
	I1109 13:59:53.235776  672753 out.go:192] Setting JSON to false
	I1109 13:59:53.300375  672753 start.go:103] hostinfo: {"hostname":"kic-integration-slave8","uptime":6143,"bootTime":1604953050,"procs":289,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-14-amd64","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"ae41e7f6-8b8e-4d40-b77d-1ebb5a2d5fdb"}
	I1109 13:59:53.301133  672753 start.go:113] virtualization: kvm host
	I1109 13:59:53.304505  672753 out.go:110] * [functional-20201109132758-342799] minikube v1.14.2 on Debian 9.13
	I1109 13:59:53.307647  672753 out.go:110]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/kubeconfig
	I1109 13:59:53.312091  672753 out.go:110]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 13:59:53.315781  672753 out.go:110]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-9627-340169-28044cddb5b825dc6c4e07ed62c91708294461e9/.minikube
	I1109 13:59:53.318645  672753 out.go:110]   - MINIKUBE_LOCATION=9627
	I1109 13:59:53.320067  672753 driver.go:288] Setting default libvirt URI to qemu:///system
	I1109 13:59:53.384719  672753 docker.go:117] docker version: linux-19.03.13
	I1109 13:59:53.384898  672753 cli_runner.go:110] Run: docker system info --format "{{json .}}"
	I1109 13:59:53.508139  672753 info.go:253] docker info: {ID:F6IX:ZLDR:GSU5:57QV:GUUZ:QOCT:V5VG:5GRC:MXPB:2JZB:PBMT:ABFJ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:802 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2020-11-09 13:59:53.434161673 -0800 PST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-14-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:31628288000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:kic-integration-slave8 Labels:[] ExperimentalBuild:false ServerVersion:19.03.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fba4e9a7d01810a393d5d25a3621dc101981175 Expected:8fba4e9a7d01810a393d5d25a3621dc101981175} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnin
gs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1109 13:59:53.508249  672753 docker.go:147] overlay module found
	I1109 13:59:53.511018  672753 out.go:110] * Using the docker driver based on existing profile
	I1109 13:59:53.511060  672753 start.go:272] selected driver: docker
	I1109 13:59:53.511070  672753 start.go:680] validating driver "docker" against &{Name:functional-20201109132758-342799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14-snapshot@sha256:1e303d96e9d72371235cb28ed77f9b3ba67fb4966085202238e635f1d80181f8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.2 ClusterName:functional-20201109132758-342799 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.16 Port:8441 KubernetesVersion:v1.19.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true apps_running:true default_sa:true kubelet:true system_pods:true] StartHostTimeout:6m0s ExposedPort
s:[]}
	I1109 13:59:53.511260  672753 start.go:691] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Fix: Doc:}
	I1109 13:59:53.514578  672753 out.go:110] 
	W1109 13:59:53.514864  672753 out.go:146] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 953MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 953MB
	I1109 13:59:53.520367  672753 out.go:110] 

                                                
                                                
** /stderr **
functional_test.go:484: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20201109132758-342799 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (0.79s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:383: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 status

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:389: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.34s)

                                                
                                    
TestFunctional/parallel/LogsCmd (4.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/LogsCmd
=== PAUSE TestFunctional/parallel/LogsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/LogsCmd
functional_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 logs

                                                
                                                
=== CONT  TestFunctional/parallel/LogsCmd
functional_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p functional-20201109132758-342799 logs: (4.280625969s)
--- PASS: TestFunctional/parallel/LogsCmd (4.28s)

                                                
                                    
TestFunctional/parallel/MountCmd (5.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
fn_mount_cmd_test.go:72: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20201109132758-342799 /tmp/mounttest925323007:/mount-9p --alsologtostderr -v=1]
fn_mount_cmd_test.go:106: wrote "test-1604959182410058819" to /tmp/mounttest925323007/created-by-test
fn_mount_cmd_test.go:106: wrote "test-1604959182410058819" to /tmp/mounttest925323007/created-by-test-removed-by-pod
fn_mount_cmd_test.go:106: wrote "test-1604959182410058819" to /tmp/mounttest925323007/test-1604959182410058819
fn_mount_cmd_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 ssh "findmnt -T /mount-9p | grep 9p"
fn_mount_cmd_test.go:114: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20201109132758-342799 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (455.819927ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
fn_mount_cmd_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 ssh "findmnt -T /mount-9p | grep 9p"
fn_mount_cmd_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 ssh -- ls -la /mount-9p
fn_mount_cmd_test.go:132: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  9 21:59 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  9 21:59 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  9 21:59 test-1604959182410058819
fn_mount_cmd_test.go:136: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 ssh cat /mount-9p/test-1604959182410058819
fn_mount_cmd_test.go:147: (dbg) Run:  kubectl --context functional-20201109132758-342799 replace --force -f testdata/busybox-mount-test.yaml
fn_mount_cmd_test.go:152: (dbg) TestFunctional/parallel/MountCmd: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:333: "busybox-mount" [9b482b93-6398-4987-b741-616b24148c0f] Pending
helpers_test.go:333: "busybox-mount" [9b482b93-6398-4987-b741-616b24148c0f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
helpers_test.go:333: "busybox-mount" [9b482b93-6398-4987-b741-616b24148c0f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
fn_mount_cmd_test.go:152: (dbg) TestFunctional/parallel/MountCmd: integration-test=busybox-mount healthy within 2.008023934s
fn_mount_cmd_test.go:168: (dbg) Run:  kubectl --context functional-20201109132758-342799 logs busybox-mount
fn_mount_cmd_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 ssh stat /mount-9p/created-by-test
fn_mount_cmd_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 ssh stat /mount-9p/created-by-pod
fn_mount_cmd_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 ssh "sudo umount -f /mount-9p"
fn_mount_cmd_test.go:93: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20201109132758-342799 /tmp/mounttest925323007:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd (5.49s)

                                                
                                    
TestFunctional/parallel/ServiceCmd (15.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:755: (dbg) Run:  kubectl --context functional-20201109132758-342799 create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
functional_test.go:759: (dbg) Run:  kubectl --context functional-20201109132758-342799 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:764: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:333: "hello-node-7567d9fdc9-vpfcr" [58d55ff3-bab9-4dbc-975f-96dc4d1ba72f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:333: "hello-node-7567d9fdc9-vpfcr" [58d55ff3-bab9-4dbc-975f-96dc4d1ba72f] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:764: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 13.008432272s
functional_test.go:768: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 service list
functional_test.go:781: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 service --namespace=default --https --url hello-node

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:790: found endpoint: https://192.168.49.16:32594
functional_test.go:801: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 service hello-node --url --format={{.IP}}

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:810: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 service hello-node --url

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:816: found endpoint for hello-node: http://192.168.49.16:32594
functional_test.go:827: Attempting to fetch http://192.168.49.16:32594 ...
functional_test.go:846: http://192.168.49.16:32594: success! body:
CLIENT VALUES:
client_address=172.17.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://192.168.49.16:8080/

                                                
                                                
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

                                                
                                                
HEADERS RECEIVED:
accept-encoding=gzip
host=192.168.49.16:32594
user-agent=Go-http-client/1.1
BODY:
-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmd (15.09s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:861: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 addons list
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (157.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
fn_pvc_test.go:43: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:318: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.16:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.16:8441: connect: connection refused
helpers_test.go:318: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.16:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.16:8441: connect: connection refused
helpers_test.go:318: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.16:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.16:8441: connect: connection refused
helpers_test.go:318: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.16:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.16:8441: connect: connection refused
helpers_test.go:318: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.16:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.16:8441: connect: connection refused
helpers_test.go:318: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.16:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.16:8441: connect: connection refused
helpers_test.go:318: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.16:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.16:8441: connect: connection refused
helpers_test.go:318: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.16:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.16:8441: connect: connection refused
helpers_test.go:318: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.16:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.16:8441: connect: connection refused
helpers_test.go:333: "storage-provisioner" [8a2dc99d-6b4d-4f0b-9230-ac84422061b4] Running
helpers_test.go:333: "storage-provisioner" [8a2dc99d-6b4d-4f0b-9230-ac84422061b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
fn_pvc_test.go:43: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 19.006762429s
fn_pvc_test.go:48: (dbg) Run:  kubectl --context functional-20201109132758-342799 get storageclass -o=json
fn_pvc_test.go:68: (dbg) Run:  kubectl --context functional-20201109132758-342799 apply -f testdata/storage-provisioner/pvc.yaml
fn_pvc_test.go:75: (dbg) Run:  kubectl --context functional-20201109132758-342799 get pvc myclaim -o=json
fn_pvc_test.go:75: (dbg) Run:  kubectl --context functional-20201109132758-342799 get pvc myclaim -o=json

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
fn_pvc_test.go:75: (dbg) Run:  kubectl --context functional-20201109132758-342799 get pvc myclaim -o=json
fn_pvc_test.go:75: (dbg) Run:  kubectl --context functional-20201109132758-342799 get pvc myclaim -o=json

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
fn_pvc_test.go:75: (dbg) Run:  kubectl --context functional-20201109132758-342799 get pvc myclaim -o=json

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
fn_pvc_test.go:75: (dbg) Run:  kubectl --context functional-20201109132758-342799 get pvc myclaim -o=json

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
fn_pvc_test.go:75: (dbg) Run:  kubectl --context functional-20201109132758-342799 get pvc myclaim -o=json
fn_pvc_test.go:75: (dbg) Run:  kubectl --context functional-20201109132758-342799 get pvc myclaim -o=json

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
fn_pvc_test.go:75: (dbg) Run:  kubectl --context functional-20201109132758-342799 get pvc myclaim -o=json
fn_pvc_test.go:124: (dbg) Run:  kubectl --context functional-20201109132758-342799 apply -f testdata/storage-provisioner/pod.yaml
fn_pvc_test.go:129: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:333: "sp-pod" [419d701c-c6c2-41d6-8974-53be5ba6239b] Pending
helpers_test.go:333: "sp-pod" [419d701c-c6c2-41d6-8974-53be5ba6239b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:333: "sp-pod" [419d701c-c6c2-41d6-8974-53be5ba6239b] Running
fn_pvc_test.go:129: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.031209108s
fn_pvc_test.go:99: (dbg) Run:  kubectl --context functional-20201109132758-342799 exec sp-pod -- touch /tmp/mount/foo
fn_pvc_test.go:105: (dbg) Run:  kubectl --context functional-20201109132758-342799 delete -f testdata/storage-provisioner/pod.yaml
fn_pvc_test.go:105: (dbg) Done: kubectl --context functional-20201109132758-342799 delete -f testdata/storage-provisioner/pod.yaml: (8.022907639s)
fn_pvc_test.go:124: (dbg) Run:  kubectl --context functional-20201109132758-342799 apply -f testdata/storage-provisioner/pod.yaml
fn_pvc_test.go:129: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:333: "sp-pod" [a097ccd9-72db-4541-a1b9-43aaa333624f] Pending
helpers_test.go:333: "sp-pod" [a097ccd9-72db-4541-a1b9-43aaa333624f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:333: "sp-pod" [a097ccd9-72db-4541-a1b9-43aaa333624f] Running
fn_pvc_test.go:129: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007332073s
fn_pvc_test.go:113: (dbg) Run:  kubectl --context functional-20201109132758-342799 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (157.23s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:894: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 ssh "echo hello"
functional_test.go:911: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

                                                
                                    
TestFunctional/parallel/FileSync (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1025: Checking for existence of /etc/test/nested/copy/342799/hosts within VM
functional_test.go:1026: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 ssh "sudo cat /etc/test/nested/copy/342799/hosts"
functional_test.go:1031: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.46s)

                                                
                                    
TestFunctional/parallel/CertSync (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1066: Checking for existence of /etc/ssl/certs/342799.pem within VM
functional_test.go:1067: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 ssh "sudo cat /etc/ssl/certs/342799.pem"
functional_test.go:1066: Checking for existence of /usr/share/ca-certificates/342799.pem within VM
functional_test.go:1067: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 ssh "sudo cat /usr/share/ca-certificates/342799.pem"
functional_test.go:1066: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1067: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 ssh "sudo cat /etc/ssl/certs/51391683.0"
--- PASS: TestFunctional/parallel/CertSync (1.26s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:150: (dbg) Run:  kubectl --context functional-20201109132758-342799 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (130.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20201109134552-342799 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --container-runtime=docker --driver=docker  --kubernetes-version=v1.13.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20201109134552-342799 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --container-runtime=docker --driver=docker  --kubernetes-version=v1.13.0: (2m10.038036101s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (130.04s)

                                                
                                    
TestPause/serial/Pause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:103: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20201109133858-342799 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

                                                
                                    
TestPause/serial/VerifyStatus (0.55s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:75: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20201109133858-342799 --output=json --layout=cluster
status_test.go:75: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20201109133858-342799 --output=json --layout=cluster: exit status 2 (544.994149ms)

                                                
                                                
-- stdout --
	{"Name":"pause-20201109133858-342799","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.14.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":""}},"Nodes":[{"Name":"pause-20201109133858-342799","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.55s)

                                                
                                    
TestPause/serial/Unpause (0.76s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:113: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20201109133858-342799 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.76s)

                                                
                                    
TestPause/serial/PauseAgain (1.05s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:103: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20201109133858-342799 --alsologtostderr -v=5

                                                
                                                
=== CONT  TestPause/serial/PauseAgain
pause_test.go:103: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20201109133858-342799 --alsologtostderr -v=5: (1.049756893s)
--- PASS: TestPause/serial/PauseAgain (1.05s)

                                                
                                    
TestPause/serial/DeletePaused (4.62s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:123: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20201109133858-342799 --alsologtostderr -v=5

                                                
                                                
=== CONT  TestPause/serial/DeletePaused
pause_test.go:123: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20201109133858-342799 --alsologtostderr -v=5: (4.622287428s)
--- PASS: TestPause/serial/DeletePaused (4.62s)

                                                
                                    
TestStartStop/group/crio/serial/FirstStart (180.09s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Run:  out/minikube-linux-amd64 start -p crio-20201109134622-342799 --memory=2200 --alsologtostderr --wait=true --container-runtime=crio --disable-driver-mounts --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --driver=docker  --kubernetes-version=v1.15.7

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Done: out/minikube-linux-amd64 start -p crio-20201109134622-342799 --memory=2200 --alsologtostderr --wait=true --container-runtime=crio --disable-driver-mounts --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --driver=docker  --kubernetes-version=v1.15.7: (3m0.090255078s)
--- PASS: TestStartStop/group/crio/serial/FirstStart (180.09s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (9.14s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:133: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (9.023819075s)
pause_test.go:159: (dbg) Run:  docker ps -a
pause_test.go:164: (dbg) Run:  docker volume inspect pause-20201109133858-342799
pause_test.go:164: (dbg) Non-zero exit: docker volume inspect pause-20201109133858-342799: exit status 1 (57.025137ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20201109133858-342799

                                                
                                                
** /stderr **
--- PASS: TestPause/serial/VerifyDeletedResources (9.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (71.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20201109134632-342799 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.19.2
start_stop_delete_test.go:154: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20201109134632-342799 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.19.2: (1m11.544628634s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (13.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context embed-certs-20201109134632-342799 create -f testdata/busybox.yaml
start_stop_delete_test.go:163: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:333: "busybox" [5932dd1e-5d09-44e7-ad50-33ba9e6a0f8c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:333: "busybox" [5932dd1e-5d09-44e7-ad50-33ba9e6a0f8c] Running
start_stop_delete_test.go:163: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 13.023752465s
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context embed-certs-20201109134632-342799 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (13.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:169: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20201109134632-342799 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:169: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20201109134632-342799 --alsologtostderr -v=3: (11.328127574s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context old-k8s-version-20201109134552-342799 create -f testdata/busybox.yaml
start_stop_delete_test.go:163: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:333: "busybox" [3e538e60-22d5-11eb-bca7-0242456c8bb3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:333: "busybox" [3e538e60-22d5-11eb-bca7-0242456c8bb3] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:163: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.01469062s
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context old-k8s-version-20201109134552-342799 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.81s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:179: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20201109134632-342799 -n embed-certs-20201109134632-342799
start_stop_delete_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20201109134632-342799 -n embed-certs-20201109134632-342799: exit status 7 (135.295505ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:179: status error: exit status 7 (may be ok)
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20201109134632-342799
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (51.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20201109134632-342799 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.19.2

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20201109134632-342799 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.19.2: (50.999206796s)
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20201109134632-342799 -n embed-certs-20201109134632-342799
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.54s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:169: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20201109134552-342799 --alsologtostderr -v=3
start_stop_delete_test.go:169: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20201109134552-342799 --alsologtostderr -v=3: (11.516349362s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.52s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:179: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20201109134552-342799 -n old-k8s-version-20201109134552-342799
start_stop_delete_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20201109134552-342799 -n old-k8s-version-20201109134552-342799: exit status 7 (137.209699ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:179: status error: exit status 7 (may be ok)
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20201109134552-342799
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (56.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20201109134552-342799 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --container-runtime=docker --driver=docker  --kubernetes-version=v1.13.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20201109134552-342799 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --container-runtime=docker --driver=docker  --kubernetes-version=v1.13.0: (55.876882023s)
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20201109134552-342799 -n old-k8s-version-20201109134552-342799
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (56.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-584f46694c-nqgzr" [1f496dcb-22d1-4c9f-90bb-2b9130632b61] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:333: "kubernetes-dashboard-584f46694c-nqgzr" [1f496dcb-22d1-4c9f-90bb-2b9130632b61] Running
start_stop_delete_test.go:213: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.014467789s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:224: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-584f46694c-nqgzr" [1f496dcb-22d1-4c9f-90bb-2b9130632b61] Running
start_stop_delete_test.go:224: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009759997s
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-66766c77dc-f24f5" [6c190efb-22d5-11eb-bfda-02423a01d930] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:333: "kubernetes-dashboard-66766c77dc-f24f5" [6c190efb-22d5-11eb-bfda-02423a01d930] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.016978046s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.02s)

                                                
                                    
TestStartStop/group/crio/serial/DeployApp (10.9s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/DeployApp
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context crio-20201109134622-342799 create -f testdata/busybox.yaml

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/DeployApp
start_stop_delete_test.go:163: (dbg) TestStartStop/group/crio/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:333: "busybox" [47b7583b-cf4b-4cbf-b299-02637d0f414e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/DeployApp
helpers_test.go:333: "busybox" [47b7583b-cf4b-4cbf-b299-02637d0f414e] Running

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/DeployApp
start_stop_delete_test.go:163: (dbg) TestStartStop/group/crio/serial/DeployApp: integration-test=busybox healthy within 10.015829218s
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context crio-20201109134622-342799 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/crio/serial/DeployApp (10.90s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20201109134632-342799 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20201109134632-342799 -n embed-certs-20201109134632-342799
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20201109134632-342799 -n embed-certs-20201109134632-342799: exit status 2 (427.458145ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20201109134632-342799 -n embed-certs-20201109134632-342799
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20201109134632-342799 -n embed-certs-20201109134632-342799: exit status 2 (456.169087ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20201109134632-342799 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20201109134632-342799 -n embed-certs-20201109134632-342799
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20201109134632-342799 -n embed-certs-20201109134632-342799
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.54s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:224: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-66766c77dc-f24f5" [6c190efb-22d5-11eb-bfda-02423a01d930] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:224: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006639911s
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/containerd/serial/FirstStart (107.77s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Run:  out/minikube-linux-amd64 start -p containerd-20201109134931-342799 --memory=2200 --alsologtostderr --wait=true --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.19.2

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Done: out/minikube-linux-amd64 start -p containerd-20201109134931-342799 --memory=2200 --alsologtostderr --wait=true --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.19.2: (1m47.772252708s)
--- PASS: TestStartStop/group/containerd/serial/FirstStart (107.77s)

                                                
                                    
TestStartStop/group/crio/serial/Stop (21.29s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/Stop
start_stop_delete_test.go:169: (dbg) Run:  out/minikube-linux-amd64 stop -p crio-20201109134622-342799 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/Stop
start_stop_delete_test.go:169: (dbg) Done: out/minikube-linux-amd64 stop -p crio-20201109134622-342799 --alsologtostderr -v=3: (21.291719158s)
--- PASS: TestStartStop/group/crio/serial/Stop (21.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20201109134552-342799 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20201109134552-342799 -n old-k8s-version-20201109134552-342799
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20201109134552-342799 -n old-k8s-version-20201109134552-342799: exit status 2 (434.84829ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20201109134552-342799 -n old-k8s-version-20201109134552-342799
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20201109134552-342799 -n old-k8s-version-20201109134552-342799: exit status 2 (422.435312ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20201109134552-342799 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20201109134552-342799 -n old-k8s-version-20201109134552-342799
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20201109134552-342799 -n old-k8s-version-20201109134552-342799
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.71s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (91.72s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20201109134950-342799 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.19.2

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20201109134950-342799 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.19.2: (1m31.720560971s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (91.72s)

                                                
                                    
TestStartStop/group/crio/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/EnableAddonAfterStop
start_stop_delete_test.go:179: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p crio-20201109134622-342799 -n crio-20201109134622-342799
start_stop_delete_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p crio-20201109134622-342799 -n crio-20201109134622-342799: exit status 7 (131.105255ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:179: status error: exit status 7 (may be ok)
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p crio-20201109134622-342799
--- PASS: TestStartStop/group/crio/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/crio/serial/SecondStart (56.72s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Run:  out/minikube-linux-amd64 start -p crio-20201109134622-342799 --memory=2200 --alsologtostderr --wait=true --container-runtime=crio --disable-driver-mounts --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --driver=docker  --kubernetes-version=v1.15.7
start_stop_delete_test.go:195: (dbg) Done: out/minikube-linux-amd64 start -p crio-20201109134622-342799 --memory=2200 --alsologtostderr --wait=true --container-runtime=crio --disable-driver-mounts --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --driver=docker  --kubernetes-version=v1.15.7: (56.250762502s)
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p crio-20201109134622-342799 -n crio-20201109134622-342799
--- PASS: TestStartStop/group/crio/serial/SecondStart (56.72s)

                                                
                                    
TestStartStop/group/crio/serial/UserAppExistsAfterStop (15.61s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: (dbg) TestStartStop/group/crio/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-5ddb79bb9f-ghs7v" [4fab7cfe-a4f7-4fac-aa3e-f4ced316c49d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/UserAppExistsAfterStop
helpers_test.go:333: "kubernetes-dashboard-5ddb79bb9f-ghs7v" [4fab7cfe-a4f7-4fac-aa3e-f4ced316c49d] Running

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: (dbg) TestStartStop/group/crio/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.6074883s
--- PASS: TestStartStop/group/crio/serial/UserAppExistsAfterStop (15.61s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (100.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20201109135103-342799 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p auto-20201109135103-342799 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --driver=docker : (1m40.933653584s)
--- PASS: TestNetworkPlugins/group/auto/Start (100.93s)

                                                
                                    
TestStartStop/group/crio/serial/AddonExistsAfterStop (5.22s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/AddonExistsAfterStop
start_stop_delete_test.go:224: (dbg) TestStartStop/group/crio/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-5ddb79bb9f-ghs7v" [4fab7cfe-a4f7-4fac-aa3e-f4ced316c49d] Running
start_stop_delete_test.go:224: (dbg) TestStartStop/group/crio/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.220335605s
--- PASS: TestStartStop/group/crio/serial/AddonExistsAfterStop (5.22s)

                                                
                                    
TestStartStop/group/containerd/serial/DeployApp (10.09s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/DeployApp
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context containerd-20201109134931-342799 create -f testdata/busybox.yaml

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/DeployApp
start_stop_delete_test.go:163: (dbg) TestStartStop/group/containerd/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:333: "busybox" [4e9b15da-26ba-4f5a-ade1-f17f3f716777] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/DeployApp
helpers_test.go:333: "busybox" [4e9b15da-26ba-4f5a-ade1-f17f3f716777] Running

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/DeployApp
start_stop_delete_test.go:163: (dbg) TestStartStop/group/containerd/serial/DeployApp: integration-test=busybox healthy within 9.026396361s
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context containerd-20201109134931-342799 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/containerd/serial/DeployApp (10.09s)

                                                
                                    
TestStartStop/group/crio/serial/Pause (4.5s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 pause -p crio-20201109134622-342799 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Done: out/minikube-linux-amd64 pause -p crio-20201109134622-342799 --alsologtostderr -v=1: (1.14470782s)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p crio-20201109134622-342799 -n crio-20201109134622-342799
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p crio-20201109134622-342799 -n crio-20201109134622-342799: exit status 2 (487.516831ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p crio-20201109134622-342799 -n crio-20201109134622-342799

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/Pause
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p crio-20201109134622-342799 -n crio-20201109134622-342799: exit status 2 (516.043667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 unpause -p crio-20201109134622-342799 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/Pause
start_stop_delete_test.go:238: (dbg) Done: out/minikube-linux-amd64 unpause -p crio-20201109134622-342799 --alsologtostderr -v=1: (1.060638998s)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p crio-20201109134622-342799 -n crio-20201109134622-342799
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p crio-20201109134622-342799 -n crio-20201109134622-342799
--- PASS: TestStartStop/group/crio/serial/Pause (4.50s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:169: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20201109134950-342799 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:169: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20201109134950-342799 --alsologtostderr -v=3: (11.430831701s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.43s)

                                                
                                    
TestStartStop/group/containerd/serial/Stop (25.15s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/Stop
start_stop_delete_test.go:169: (dbg) Run:  out/minikube-linux-amd64 stop -p containerd-20201109134931-342799 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/Stop
start_stop_delete_test.go:169: (dbg) Done: out/minikube-linux-amd64 stop -p containerd-20201109134931-342799 --alsologtostderr -v=3: (25.152779333s)
--- PASS: TestStartStop/group/containerd/serial/Stop (25.15s)

                                                
                                    
TestNetworkPlugins/group/false/Start (79.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p false-20201109135129-342799 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=false --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p false-20201109135129-342799 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=false --driver=docker : (1m19.287472563s)
--- PASS: TestNetworkPlugins/group/false/Start (79.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:179: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20201109134950-342799 -n newest-cni-20201109134950-342799
start_stop_delete_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20201109134950-342799 -n newest-cni-20201109134950-342799: exit status 7 (256.789749ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:179: status error: exit status 7 (may be ok)
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20201109134950-342799
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.43s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (58.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20201109134950-342799 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.19.2

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20201109134950-342799 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.19.2: (58.012477853s)
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20201109134950-342799 -n newest-cni-20201109134950-342799
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (58.49s)

                                                
                                    
TestStartStop/group/containerd/serial/EnableAddonAfterStop (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/EnableAddonAfterStop
start_stop_delete_test.go:179: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p containerd-20201109134931-342799 -n containerd-20201109134931-342799
start_stop_delete_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p containerd-20201109134931-342799 -n containerd-20201109134931-342799: exit status 7 (150.976974ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:179: status error: exit status 7 (may be ok)
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p containerd-20201109134931-342799
--- PASS: TestStartStop/group/containerd/serial/EnableAddonAfterStop (0.33s)

                                                
                                    
TestStartStop/group/containerd/serial/SecondStart (42.3s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Run:  out/minikube-linux-amd64 start -p containerd-20201109134931-342799 --memory=2200 --alsologtostderr --wait=true --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.19.2

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Done: out/minikube-linux-amd64 start -p containerd-20201109134931-342799 --memory=2200 --alsologtostderr --wait=true --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.19.2: (41.73836024s)
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p containerd-20201109134931-342799 -n containerd-20201109134931-342799
--- PASS: TestStartStop/group/containerd/serial/SecondStart (42.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:212: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:223: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/containerd/serial/UserAppExistsAfterStop (16.03s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: (dbg) TestStartStop/group/containerd/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/UserAppExistsAfterStop
helpers_test.go:333: "kubernetes-dashboard-584f46694c-p6z7f" [3f8065b2-207e-42f7-bb41-d677815e9f47] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:333: "kubernetes-dashboard-584f46694c-p6z7f" [3f8065b2-207e-42f7-bb41-d677815e9f47] Running

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: (dbg) TestStartStop/group/containerd/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.021945335s
--- PASS: TestStartStop/group/containerd/serial/UserAppExistsAfterStop (16.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20201109134950-342799 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20201109134950-342799 -n newest-cni-20201109134950-342799
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20201109134950-342799 -n newest-cni-20201109134950-342799: exit status 2 (394.463493ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20201109134950-342799 -n newest-cni-20201109134950-342799
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20201109134950-342799 -n newest-cni-20201109134950-342799: exit status 2 (392.998207ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20201109134950-342799 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20201109134950-342799 -n newest-cni-20201109134950-342799
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20201109134950-342799 -n newest-cni-20201109134950-342799
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.42s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20201109135103-342799 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context auto-20201109135103-342799 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-jcs8c" [01ed7f45-2552-4736-a63e-e3c6ef8717d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-jcs8c" [01ed7f45-2552-4736-a63e-e3c6ef8717d0] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.012157609s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.92s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-20201109135129-342799 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (13.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context false-20201109135129-342799 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-kh7fs" [e10df8fa-f0b4-4bb4-a320-620c06579867] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-kh7fs" [e10df8fa-f0b4-4bb4-a320-620c06579867] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.083279072s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.42s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (167.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20201109135250-342799 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=cilium --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20201109135250-342799 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=cilium --driver=docker : (2m47.821020129s)
--- PASS: TestNetworkPlugins/group/cilium/Start (167.82s)

                                                
                                    
TestStartStop/group/containerd/serial/AddonExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/AddonExistsAfterStop
start_stop_delete_test.go:224: (dbg) TestStartStop/group/containerd/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-584f46694c-p6z7f" [3f8065b2-207e-42f7-bb41-d677815e9f47] Running

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/AddonExistsAfterStop
start_stop_delete_test.go:224: (dbg) TestStartStop/group/containerd/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009939098s
--- PASS: TestStartStop/group/containerd/serial/AddonExistsAfterStop (5.01s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (34.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:156: (dbg) Run:  kubectl --context auto-20201109135103-342799 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:156: (dbg) Non-zero exit: kubectl --context auto-20201109135103-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.377004274s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:156: (dbg) Run:  kubectl --context auto-20201109135103-342799 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:156: (dbg) Non-zero exit: kubectl --context auto-20201109135103-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.345584662s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:156: (dbg) Run:  kubectl --context auto-20201109135103-342799 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (34.73s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (386.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:156: (dbg) Run:  kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (124.506678ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:156: (dbg) Run:  kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (128.196635ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:156: (dbg) Run:  kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (121.477216ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:156: (dbg) Run:  kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:156: (dbg) Non-zero exit: kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (106.151579ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:156: (dbg) Run:  kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (99.744302ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:156: (dbg) Run:  kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (179.313305ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **
net_test.go:156: (dbg) Run:  kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (108.06926ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:156: (dbg) Run:  kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (93.071861ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:156: (dbg) Run:  kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (171.486613ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **
net_test.go:156: (dbg) Run:  kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (109.672053ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **
net_test.go:156: (dbg) Run:  kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (186.983447ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:156: (dbg) Run:  kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (108.902931ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:156: (dbg) Run:  kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (104.309974ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:156: (dbg) Run:  kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (107.708045ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:156: (dbg) Run:  kubectl --context false-20201109135129-342799 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (386.08s)

                                                
                                    
TestStartStop/group/containerd/serial/Pause (3.72s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 pause -p containerd-20201109134931-342799 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p containerd-20201109134931-342799 -n containerd-20201109134931-342799

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/Pause
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p containerd-20201109134931-342799 -n containerd-20201109134931-342799: exit status 2 (434.520492ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p containerd-20201109134931-342799 -n containerd-20201109134931-342799
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p containerd-20201109134931-342799 -n containerd-20201109134931-342799: exit status 2 (428.652805ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 unpause -p containerd-20201109134931-342799 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p containerd-20201109134931-342799 -n containerd-20201109134931-342799
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p containerd-20201109134931-342799 -n containerd-20201109134931-342799
--- PASS: TestStartStop/group/containerd/serial/Pause (3.72s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (156.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20201109135316-342799 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=calico --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p calico-20201109135316-342799 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=calico --driver=docker : (2m36.429922423s)
--- PASS: TestNetworkPlugins/group/calico/Start (156.43s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:175: (dbg) Run:  kubectl --context auto-20201109135103-342799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.33s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (7.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:188: (dbg) Run:  kubectl --context auto-20201109135103-342799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/HairPin
net_test.go:188: (dbg) Non-zero exit: kubectl --context auto-20201109135103-342799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (7.008924478s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (7.01s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (111.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20201109135347-342799 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=testdata/weavenet.yaml --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20201109135347-342799 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=testdata/weavenet.yaml --driver=docker : (1m51.834278518s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (111.83s)

                                                
                                    
TestNetworkPlugins/group/cilium/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:88: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:333: "cilium-wqrlz" [c868bf93-b146-4f37-b8ea-974c4dcfe781] Running / Ready:ContainersNotReady (containers with unready status: [cilium-agent]) / ContainersReady:ContainersNotReady (containers with unready status: [cilium-agent])

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:88: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.022963852s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20201109135347-342799 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/NetCatPod (14.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context custom-weave-20201109135347-342799 replace --force -f testdata/netcat-deployment.yaml
net_test.go:139: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-9jc2d" [07b9a805-b4da-41f3-adbe-493834097908] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-9jc2d" [07b9a805-b4da-41f3-adbe-493834097908] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 14.008627824s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (14.37s)

                                                
                                    
TestNetworkPlugins/group/cilium/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20201109135250-342799 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/cilium/NetCatPod (14.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context cilium-20201109135250-342799 replace --force -f testdata/netcat-deployment.yaml
net_test.go:139: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-2bxtk" [820a16c4-7059-4250-8f8b-517dfc6a0281] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-2bxtk" [820a16c4-7059-4250-8f8b-517dfc6a0281] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 14.010065837s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (14.57s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:88: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:333: "calico-node-cvtq9" [d564d7e1-08fa-4859-98f7-465c54ad8217] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:88: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.022897834s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (71.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20201109135557-342799 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --enable-default-cni=true --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20201109135557-342799 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --enable-default-cni=true --driver=docker : (1m11.03480103s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.03s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20201109135316-342799 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (14.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context calico-20201109135316-342799 replace --force -f testdata/netcat-deployment.yaml
net_test.go:139: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-q29kv" [b6ab2283-503e-46da-830a-843797dd7f7f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-q29kv" [b6ab2283-503e-46da-830a-843797dd7f7f] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.008370725s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.44s)

                                                
                                    
TestNetworkPlugins/group/cilium/DNS (134.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:156: (dbg) Run:  kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (110.775436ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **
net_test.go:156: (dbg) Run:  kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (87.130619ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **
net_test.go:156: (dbg) Run:  kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (173.312955ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **
net_test.go:156: (dbg) Run:  kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (144.194078ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/DNS
net_test.go:156: (dbg) Run:  kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (107.125179ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/DNS
net_test.go:156: (dbg) Run:  kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/DNS
net_test.go:156: (dbg) Non-zero exit: kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (142.603274ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/DNS
net_test.go:156: (dbg) Run:  kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (95.712404ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/DNS
net_test.go:156: (dbg) Run:  kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (139.430746ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **
net_test.go:156: (dbg) Run:  kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (86.184384ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/DNS
net_test.go:156: (dbg) Run:  kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Non-zero exit: kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (101.916159ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/DNS
net_test.go:156: (dbg) Run:  kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/DNS
net_test.go:156: (dbg) Done: kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- nslookup kubernetes.default: (8.206392928s)
--- PASS: TestNetworkPlugins/group/cilium/DNS (134.43s)
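For reference, a minimal shell sketch of the retry loop this DNS check effectively performs, in case the slow cilium DNS convergence above needs to be reproduced by hand. It assumes kubectl still points at the cilium-20201109135250-342799 context with the netcat deployment from testdata/netcat-deployment.yaml running; the 30-attempt / 5-second pacing is illustrative, not the test's actual poll interval.

	#!/usr/bin/env bash
	# Repeat the in-cluster DNS lookup until it succeeds, mirroring the
	# repeated nslookup calls logged above for TestNetworkPlugins/group/cilium/DNS.
	ctx=cilium-20201109135250-342799
	for attempt in $(seq 1 30); do
	  if kubectl --context "$ctx" exec deployment/netcat -- nslookup kubernetes.default; then
	    echo "DNS resolved after ${attempt} attempt(s)"
	    exit 0
	  fi
	  sleep 5   # illustrative backoff, not the test's own interval
	done
	echo "DNS still failing after 30 attempts" >&2
	exit 1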

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:156: (dbg) Run:  kubectl --context calico-20201109135316-342799 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.65s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:175: (dbg) Run:  kubectl --context calico-20201109135316-342799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.34s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:188: (dbg) Run:  kubectl --context calico-20201109135316-342799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.40s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (89.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20201109135618-342799 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=kindnet --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20201109135618-342799 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=kindnet --driver=docker : (1m29.39577004s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (89.40s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20201109135557-342799 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (16.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context enable-default-cni-20201109135557-342799 replace --force -f testdata/netcat-deployment.yaml
net_test.go:139: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-fjm6w" [0065cc44-4817-4da5-870c-026cc4fd250f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:333: "netcat-66fbc655d5-fjm6w" [0065cc44-4817-4da5-870c-026cc4fd250f] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 16.00957535s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (16.37s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:156: (dbg) Run:  kubectl --context enable-default-cni-20201109135557-342799 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-20201109135557-342799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-20201109135557-342799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (63.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20201109135730-342799 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=bridge --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20201109135730-342799 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=bridge --driver=docker : (1m3.773475684s)
--- PASS: TestNetworkPlugins/group/bridge/Start (63.77s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:88: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:333: "kindnet-p9k9c" [23d2994c-3736-401d-91c1-5d073b32c951] Running
net_test.go:88: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.023880814s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20201109135618-342799 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (15.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context kindnet-20201109135618-342799 replace --force -f testdata/netcat-deployment.yaml
net_test.go:139: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-fgktb" [4c9f63af-2a1f-4293-a3bd-0eb084a38638] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:333: "netcat-66fbc655d5-fgktb" [4c9f63af-2a1f-4293-a3bd-0eb084a38638] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 15.114356771s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (15.50s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (3.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:156: (dbg) Run:  kubectl --context kindnet-20201109135618-342799 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:156: (dbg) Done: kubectl --context kindnet-20201109135618-342799 exec deployment/netcat -- nslookup kubernetes.default: (3.955639121s)
--- PASS: TestNetworkPlugins/group/kindnet/DNS (3.96s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:175: (dbg) Run:  kubectl --context kindnet-20201109135618-342799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20201109135618-342799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.32s)

                                                
                                    
TestNetworkPlugins/group/cilium/Localhost (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:175: (dbg) Run:  kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.30s)

                                                
                                    
TestNetworkPlugins/group/cilium/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:188: (dbg) Run:  kubectl --context cilium-20201109135250-342799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (65.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-20201109135817-342799 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --network-plugin=kubenet --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-20201109135817-342799 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --network-plugin=kubenet --driver=docker : (1m5.163885873s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (65.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20201109135730-342799 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context bridge-20201109135730-342799 replace --force -f testdata/netcat-deployment.yaml
net_test.go:139: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-f46nq" [6b3c8d57-1b87-47e5-bca6-c567f518aac9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:333: "netcat-66fbc655d5-f46nq" [6b3c8d57-1b87-47e5-bca6-c567f518aac9] Running
net_test.go:139: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.910310946s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (7.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:156: (dbg) Run:  kubectl --context bridge-20201109135730-342799 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:156: (dbg) Done: kubectl --context bridge-20201109135730-342799 exec deployment/netcat -- nslookup kubernetes.default: (7.292122984s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (7.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:175: (dbg) Run:  kubectl --context bridge-20201109135730-342799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:188: (dbg) Run:  kubectl --context bridge-20201109135730-342799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.29s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1160: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1160: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1160: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201109132758-342799 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
fn_tunnel_cmd_test.go:125: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20201109132758-342799 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-20201109135817-342799 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (13.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context kubenet-20201109135817-342799 replace --force -f testdata/netcat-deployment.yaml
net_test.go:139: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-cnhcd" [a2d32864-0cca-4164-9abf-31fb02256cd0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-cnhcd" [a2d32864-0cca-4164-9abf-31fb02256cd0] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.008895963s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.36s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:175: (dbg) Run:  kubectl --context false-20201109135129-342799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (5.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:188: (dbg) Run:  kubectl --context false-20201109135129-342799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/HairPin
net_test.go:188: (dbg) Non-zero exit: kubectl --context false-20201109135129-342799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.261450159s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.26s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:156: (dbg) Run:  kubectl --context kubenet-20201109135817-342799 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.33s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:175: (dbg) Run:  kubectl --context kubenet-20201109135817-342799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:188: (dbg) Run:  kubectl --context kubenet-20201109135817-342799 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.25s)
2020/11/09 13:59:57 [DEBUG] GET http://127.0.0.1:44159/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:656: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:660: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:681: (dbg) Run:  out/minikube-linux-amd64 profile list
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:703: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
fn_tunnel_cmd_test.go:363: (dbg) stopping [out/minikube-linux-amd64 -p functional-20201109132758-342799 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    

Test skip (9/209)

TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:360: Skipping olm test till this timeout issue is solved https://github.com/operator-framework/operator-lifecycle-manager/issues/1534#issuecomment-632342257
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:110: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:182: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:33: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestNetworkPlugins/group/flannel (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:66: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
--- SKIP: TestNetworkPlugins/group/flannel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
fn_tunnel_cmd_test.go:95: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
fn_tunnel_cmd_test.go:95: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
fn_tunnel_cmd_test.go:95: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    