Test Report: Docker_Linux 9933

                    
Commit: f3be305abb7c609130b6957b2b63ae924113770f

Failed tests (2/213)

Order  Failed test                        Duration (s)
105    TestSkaffold                       45.2
132    TestFunctional/parallel/DockerEnv  25.27
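
The TestSkaffold failure (and likely the DockerEnv one) points at a TLS client-certificate mismatch when talking to the Docker daemon inside the minikube node: the skaffold stderr below reports "remote error: tls: bad certificate", and the dockerd log shows repeated "certificate signed by unknown authority" handshake errors for the candidate CA "jenkins". As a rough manual check only (a sketch, not part of the original run; it assumes the same profile name and a local minikube build at out/minikube-linux-amd64):

	eval "$(out/minikube-linux-amd64 -p skaffold-20201211204522-6575 docker-env)"
	docker version    # expected to succeed over TLS once the client certs match the daemon's CA
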
TestSkaffold (45.2s)

=== RUN   TestSkaffold
skaffold_test.go:53: (dbg) Run:  /tmp/skaffold.exe615250882 version
skaffold_test.go:57: skaffold version: v1.17.2
skaffold_test.go:60: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-20201211204522-6575 --memory=2600 --driver=docker 
skaffold_test.go:60: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-20201211204522-6575 --memory=2600 --driver=docker : (28.998777817s)
skaffold_test.go:73: copying out/minikube-linux-amd64 to /home/jenkins/workspace/docker_Linux_integration/out/minikube
skaffold_test.go:97: (dbg) Run:  /tmp/skaffold.exe615250882 run --minikube-profile skaffold-20201211204522-6575 --kube-context skaffold-20201211204522-6575 --status-check=true --port-forward=false
skaffold_test.go:97: (dbg) Non-zero exit: /tmp/skaffold.exe615250882 run --minikube-profile skaffold-20201211204522-6575 --kube-context skaffold-20201211204522-6575 --status-check=true --port-forward=false: exit status 1 (9.865332466s)

-- stdout --
	Generating tags...
	 - leeroy-web -> leeroy-web:latest
	 - leeroy-app -> leeroy-app:latest
	Some taggers failed. Rerun with -vdebug for errors.
	Checking cache...
	 - leeroy-web: Error checking cache.

-- /stdout --
** stderr ** 
	failed to build: getting imageID for leeroy-web:latest: The server probably has client authentication (--tlsverify) enabled. Please check your TLS client certification settings: Get "https://192.168.59.176:2376/v1.24/images/leeroy-web:latest/json": remote error: tls: bad certificate

** /stderr **
skaffold_test.go:99: error running skaffold: exit status 1

-- stdout --
	Generating tags...
	 - leeroy-web -> leeroy-web:latest
	 - leeroy-app -> leeroy-app:latest
	Some taggers failed. Rerun with -vdebug for errors.
	Checking cache...
	 - leeroy-web: Error checking cache.

-- /stdout --
** stderr ** 
	failed to build: getting imageID for leeroy-web:latest: The server probably has client authentication (--tlsverify) enabled. Please check your TLS client certification settings: Get "https://192.168.59.176:2376/v1.24/images/leeroy-web:latest/json": remote error: tls: bad certificate

** /stderr **
panic.go:617: *** TestSkaffold FAILED at 2020-12-11 20:46:02.021156039 +0000 UTC m=+1090.497231383
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:225: (dbg) Run:  docker inspect skaffold-20201211204522-6575
helpers_test.go:229: (dbg) docker inspect skaffold-20201211204522-6575:

-- stdout --
	[
	    {
	        "Id": "d3f75f60c5cf1bdc8bbdb4b8be889fbbb1544ea4be1935ee581b2c7311b69c51",
	        "Created": "2020-12-11T20:45:24.835895759Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 100633,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-12-11T20:45:25.381502386Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:06db6ca724463f987019154e0475424113315da76733d5b67f90e35719d46c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/d3f75f60c5cf1bdc8bbdb4b8be889fbbb1544ea4be1935ee581b2c7311b69c51/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d3f75f60c5cf1bdc8bbdb4b8be889fbbb1544ea4be1935ee581b2c7311b69c51/hostname",
	        "HostsPath": "/var/lib/docker/containers/d3f75f60c5cf1bdc8bbdb4b8be889fbbb1544ea4be1935ee581b2c7311b69c51/hosts",
	        "LogPath": "/var/lib/docker/containers/d3f75f60c5cf1bdc8bbdb4b8be889fbbb1544ea4be1935ee581b2c7311b69c51/d3f75f60c5cf1bdc8bbdb4b8be889fbbb1544ea4be1935ee581b2c7311b69c51-json.log",
	        "Name": "/skaffold-20201211204522-6575",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "skaffold-20201211204522-6575:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "skaffold-20201211204522-6575",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2726297600,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c3b88d24e06a6dc408276b8a3a7c237801f805d751f777ba0c08ebdc374154e3-init/diff:/var/lib/docker/overlay2/0e454b6080878527938ea7de6c9c9ca3b4e998e7de12dad0ae5d11e5a02b28ed/diff:/var/lib/docker/overlay2/b1080cf3f1c07fe0908008d3edafdd563736ac0728810681091089d69df5a2b3/diff:/var/lib/docker/overlay2/4b9b1225dce44601fa36a73ecf95bd95cdf7d55d704e1b1b9c58ca9a4f4c9db6/diff:/var/lib/docker/overlay2/9afd996885692e28e1b5ceccecab499370a88a6f5aa9af63f99ddcdf47496087/diff:/var/lib/docker/overlay2/e172f94dbfa36e7b7833646884a898590a9a7bb72f03461557b32d46044e2bf3/diff:/var/lib/docker/overlay2/5e940e100bc7df14fbaaca63292506dc23bd933bac16de182b67f8107cfc71b5/diff:/var/lib/docker/overlay2/597b039ba9eb2c747ffeca49b446b61e3121712aac6a1ce013d50a6998c46d93/diff:/var/lib/docker/overlay2/a3668e684110bc4003a16d3fe4a80296a1308a376b08c428b3ae7469edae4b8b/diff:/var/lib/docker/overlay2/617d4351ebaac66d685427e45dfe1cd0a4e0e0ac9dc4ccb7a2f382e1bfc8697d/diff:/var/lib/docker/overlay2/4ac76b
6b6d368a1f1073f4a45b6b80445e04f69010fcc5e524fe8edc2708fd5c/diff:/var/lib/docker/overlay2/155961688c82c43af6d27a734eeeae0fd8eb1766bbb9d2728d834a593622196c/diff:/var/lib/docker/overlay2/8f6b3c33ada50dd91034ae6c41722655db1e7a86bb4b61e1152696c41336aa44/diff:/var/lib/docker/overlay2/39286e41dafe62c271b224fbeaa14b9ca058246bcc76e7a81d75f765a497015e/diff:/var/lib/docker/overlay2/bc0dfc1142718ddc4a235a7a62a371f8d580e48ef41f886bce3bb6598f329ea5/diff:/var/lib/docker/overlay2/285b5808cfef05f77db3330400ad926f089148ece291f130ab7f4c822fa7be5a/diff:/var/lib/docker/overlay2/ac9feea04da985551cdd80f2698f28d116958d31882dc3888245ace574de7021/diff:/var/lib/docker/overlay2/93bc1cd943ffc655fc209b896ce12e6863e5adcbc32cee2830311b118ef17f97/diff:/var/lib/docker/overlay2/e9ca47c898e17aff2b310e13256ac91b8efff61646ca77ebe764817e42c9e278/diff:/var/lib/docker/overlay2/a0c4a393ccf7eb7a3b75e0e421d72749d0641f4e74b689d9b2a1dc9d5f9f2985/diff:/var/lib/docker/overlay2/f3ed5047774399a74e83080908727ed9f42ef86312f5879e724083ee96dc4a98/diff:/var/lib/d
ocker/overlay2/17ad49d1fc1b2eb336aaec3919f67e90045a835b6ad03fa72a6a02f8b2d7a6f9/diff:/var/lib/docker/overlay2/dda0100db23cb2ecb0d502c53e7269e90548d0f6043615cfefea6fd1a42ef67f/diff:/var/lib/docker/overlay2/accfdaeb1d703f222e13547e8fd4c06305c41c6256ac59237b67ac17b702ff5d/diff:/var/lib/docker/overlay2/e4dc6c7d508ce1056ebda7e5bf4239bb48eaa2ad06a4d483e67380212ef84a10/diff:/var/lib/docker/overlay2/d6be635d55a88f802a01d5678467aa3fe46b951550c2d880458b733ff9f56a19/diff:/var/lib/docker/overlay2/d31bed28baf1efe4c8eea0c512e6641cdfa98622cfa1f45f4f463c8e4f0ea9e6/diff:/var/lib/docker/overlay2/4eb064c2961cd60979726e7d7a78d8ac3a96af3b41699c69090b4aec9263e5f7/diff:/var/lib/docker/overlay2/66ec0abca0956048e37f5c5e2125cf299a362b35d473021004316fd83d85d33b/diff:/var/lib/docker/overlay2/5ba45d5dede37c09dccf9193592382ae7516027a675d2631ec64e687b9745c00/diff:/var/lib/docker/overlay2/1ceade4823b29f813f08c3db2bd4d966999ac776084d4b7b054d7220b5689943/diff:/var/lib/docker/overlay2/0e3148261963465326228e3e9b1f52398a9551f01263a9b78bebaa06184
ed2af/diff:/var/lib/docker/overlay2/4d90ca5a45e75e28de2e79390ac1c0075f60b1bbd9446a169ec9c45ca0702256/diff:/var/lib/docker/overlay2/b317057fb455e15ebe8bf80b7713f8ad35aff0405e06e48a935b1965b47214e7/diff:/var/lib/docker/overlay2/ed65d2c3c21872669bade9290359857509ebcf1d7427db303d876c8efdcda07b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c3b88d24e06a6dc408276b8a3a7c237801f805d751f777ba0c08ebdc374154e3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c3b88d24e06a6dc408276b8a3a7c237801f805d751f777ba0c08ebdc374154e3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c3b88d24e06a6dc408276b8a3a7c237801f805d751f777ba0c08ebdc374154e3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "skaffold-20201211204522-6575",
	                "Source": "/var/lib/docker/volumes/skaffold-20201211204522-6575/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "skaffold-20201211204522-6575",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "skaffold-20201211204522-6575",
	                "name.minikube.sigs.k8s.io": "skaffold-20201211204522-6575",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "31f499dae66e59ec12b3ba07e7737045ed19453886638274e0fe55de629caf0c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32827"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32826"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32825"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32824"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/31f499dae66e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "skaffold-20201211204522-6575": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.59.176"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d3f75f60c5cf"
	                    ],
	                    "NetworkID": "6869d2c96cc4881e8e6a12d4e175341c012ec06ec5e778cdde15b9321969fe9b",
	                    "EndpointID": "4c9de1aee08cecbad865066e3afea4b30706941d2a2c1a6597a134e771e06a2f",
	                    "Gateway": "192.168.59.1",
	                    "IPAddress": "192.168.59.176",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3b:b0",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p skaffold-20201211204522-6575 -n skaffold-20201211204522-6575
helpers_test.go:238: <<< TestSkaffold FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestSkaffold]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p skaffold-20201211204522-6575 logs -n 25
helpers_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p skaffold-20201211204522-6575 logs -n 25: (1.944589651s)
helpers_test.go:246: TestSkaffold logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Fri 2020-12-11 20:45:25 UTC, end at Fri 2020-12-11 20:46:03 UTC. --
	* Dec 11 20:45:53 skaffold-20201211204522-6575 dockerd[2914]: time="2020-12-11T20:45:53.911382240Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Dec 11 20:45:53 skaffold-20201211204522-6575 dockerd[2914]: time="2020-12-11T20:45:53.911400348Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Dec 11 20:45:53 skaffold-20201211204522-6575 dockerd[2914]: time="2020-12-11T20:45:53.912524513Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Dec 11 20:45:53 skaffold-20201211204522-6575 dockerd[2914]: time="2020-12-11T20:45:53.912551598Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Dec 11 20:45:53 skaffold-20201211204522-6575 dockerd[2914]: time="2020-12-11T20:45:53.912576132Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Dec 11 20:45:53 skaffold-20201211204522-6575 dockerd[2914]: time="2020-12-11T20:45:53.912594113Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Dec 11 20:45:53 skaffold-20201211204522-6575 dockerd[2914]: time="2020-12-11T20:45:53.931632739Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Dec 11 20:45:53 skaffold-20201211204522-6575 dockerd[2914]: time="2020-12-11T20:45:53.940289407Z" level=warning msg="Your kernel does not support swap memory limit"
	* Dec 11 20:45:53 skaffold-20201211204522-6575 dockerd[2914]: time="2020-12-11T20:45:53.940316550Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
	* Dec 11 20:45:53 skaffold-20201211204522-6575 dockerd[2914]: time="2020-12-11T20:45:53.940496433Z" level=info msg="Loading containers: start."
	* Dec 11 20:45:54 skaffold-20201211204522-6575 dockerd[2914]: time="2020-12-11T20:45:54.150830259Z" level=info msg="Removing stale sandbox 4f33f23a6626dec18e5579332b0fec747db1940dc946b3b8848e979c1dc71260 (124ac1733d1bf863ed3310cf60535bf6ad6b0fec4417f1e5912f6da3c5779eaa)"
	* Dec 11 20:45:54 skaffold-20201211204522-6575 dockerd[2914]: time="2020-12-11T20:45:54.153599433Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 3792677e091ee505bda7597ba3046009aafc9ec03ac914259bfd835a13217855 cbf5752a699308c35e079f033f67743ddd6cdfd239f49aca92d3f91584ce3081], retrying...."
	* Dec 11 20:45:54 skaffold-20201211204522-6575 dockerd[2914]: time="2020-12-11T20:45:54.205879117Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Dec 11 20:45:54 skaffold-20201211204522-6575 dockerd[2914]: time="2020-12-11T20:45:54.249708982Z" level=info msg="Loading containers: done."
	* Dec 11 20:45:54 skaffold-20201211204522-6575 dockerd[2914]: time="2020-12-11T20:45:54.279826859Z" level=info msg="Docker daemon" commit=eeddea2 graphdriver(s)=overlay2 version=20.10.0
	* Dec 11 20:45:54 skaffold-20201211204522-6575 dockerd[2914]: time="2020-12-11T20:45:54.279903929Z" level=info msg="Daemon has completed initialization"
	* Dec 11 20:45:54 skaffold-20201211204522-6575 systemd[1]: Started Docker Application Container Engine.
	* Dec 11 20:45:54 skaffold-20201211204522-6575 dockerd[2914]: time="2020-12-11T20:45:54.298369931Z" level=info msg="API listen on [::]:2376"
	* Dec 11 20:45:54 skaffold-20201211204522-6575 dockerd[2914]: time="2020-12-11T20:45:54.302868821Z" level=info msg="API listen on /var/run/docker.sock"
	* Dec 11 20:45:59 skaffold-20201211204522-6575 dockerd[2914]: http: TLS handshake error from 192.168.59.1:36520: tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "jenkins")
	* Dec 11 20:45:59 skaffold-20201211204522-6575 dockerd[2914]: http: TLS handshake error from 192.168.59.1:36522: tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "jenkins")
	* Dec 11 20:46:00 skaffold-20201211204522-6575 dockerd[2914]: http: TLS handshake error from 192.168.59.1:36524: tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "jenkins")
	* Dec 11 20:46:01 skaffold-20201211204522-6575 dockerd[2914]: http: TLS handshake error from 192.168.59.1:36598: tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "jenkins")
	* Dec 11 20:46:02 skaffold-20201211204522-6575 dockerd[2914]: http: TLS handshake error from 192.168.59.1:36608: tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "jenkins")
	* Dec 11 20:46:02 skaffold-20201211204522-6575 dockerd[2914]: http: TLS handshake error from 192.168.59.1:36606: tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "jenkins")
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	* ce983f1aa07f7       0369cf4303ffd       8 seconds ago       Running             etcd                      1                   93aba77f77f5c
	* 78eae12b32cbf       3138b6e3d4712       8 seconds ago       Running             kube-scheduler            1                   562bb8602b367
	* eab3db0200bb0       ca9843d3b5454       8 seconds ago       Running             kube-apiserver            1                   1b5e53f6b0da4
	* 72de7737a0715       b9fa1895dcaa6       8 seconds ago       Running             kube-controller-manager   1                   aac72db6481cc
	* 7b6ef6577ba73       ca9843d3b5454       23 seconds ago      Exited              kube-apiserver            0                   7ac653ada5bc8
	* 76fa734100cb1       b9fa1895dcaa6       23 seconds ago      Exited              kube-controller-manager   0                   c441e30e9faf5
	* 4dace7a8f06f6       3138b6e3d4712       23 seconds ago      Exited              kube-scheduler            0                   3b8163b702b61
	* 34ff8741eb9e5       0369cf4303ffd       23 seconds ago      Exited              etcd                      0                   a023ef9b93da9
	* 
	* ==> describe nodes <==
	* Name:               skaffold-20201211204522-6575
	* Roles:              control-plane,master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=skaffold-20201211204522-6575
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=fc69cfe93e0c46b6d41ab5653129ddf7843209ed
	*                     minikube.k8s.io/name=skaffold-20201211204522-6575
	*                     minikube.k8s.io/updated_at=2020_12_11T20_45_49_0700
	*                     minikube.k8s.io/version=v1.15.1
	*                     node-role.kubernetes.io/control-plane=
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Fri, 11 Dec 2020 20:45:46 +0000
	* Taints:             node.kubernetes.io/not-ready:NoSchedule
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  skaffold-20201211204522-6575
	*   AcquireTime:     <unset>
	*   RenewTime:       Fri, 11 Dec 2020 20:46:00 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Fri, 11 Dec 2020 20:46:01 +0000   Fri, 11 Dec 2020 20:45:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Fri, 11 Dec 2020 20:46:01 +0000   Fri, 11 Dec 2020 20:45:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Fri, 11 Dec 2020 20:46:01 +0000   Fri, 11 Dec 2020 20:45:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Fri, 11 Dec 2020 20:46:01 +0000   Fri, 11 Dec 2020 20:46:01 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.59.176
	*   Hostname:    skaffold-20201211204522-6575
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  309568300Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  309568300Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 ee28759ded1d4df1ae60839826a47b5c
	*   System UUID:                ccff03d2-3662-4261-b32e-b8f24caf6254
	*   Boot ID:                    ff2e882c-ceac-4ec5-a892-a979e1bf648a
	*   Kernel Version:             4.9.0-14-amd64
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://20.10.0
	*   Kubelet Version:            v1.20.0
	*   Kube-Proxy Version:         v1.20.0
	* Non-terminated Pods:          (4 in total)
	*   Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	*   kube-system                 etcd-skaffold-20201211204522-6575                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12s
	*   kube-system                 kube-apiserver-skaffold-20201211204522-6575             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12s
	*   kube-system                 kube-controller-manager-skaffold-20201211204522-6575    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12s
	*   kube-system                 kube-scheduler-skaffold-20201211204522-6575             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                650m (8%)   0 (0%)
	*   memory             100Mi (0%)  0 (0%)
	*   ephemeral-storage  100Mi (0%)  0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age                From     Message
	*   ----    ------                   ----               ----     -------
	*   Normal  NodeHasSufficientMemory  24s (x5 over 24s)  kubelet  Node skaffold-20201211204522-6575 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    24s (x4 over 24s)  kubelet  Node skaffold-20201211204522-6575 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     24s (x4 over 24s)  kubelet  Node skaffold-20201211204522-6575 status is now: NodeHasSufficientPID
	*   Normal  Starting                 13s                kubelet  Starting kubelet.
	*   Normal  NodeHasSufficientMemory  13s                kubelet  Node skaffold-20201211204522-6575 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    13s                kubelet  Node skaffold-20201211204522-6575 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     13s                kubelet  Node skaffold-20201211204522-6575 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             12s                kubelet  Node skaffold-20201211204522-6575 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  12s                kubelet  Updated Node Allocatable limit across pods
	* 
	* ==> dmesg <==
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff e2 00 6b 4f 61 f3 08 06        ........kOa...
	* [  +1.532305] IPv4: martian source 10.85.0.6 from 10.85.0.6, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 1e ec 44 bb 6e 82 08 06        ........D.n...
	* [  +1.660050] IPv4: martian source 10.85.0.7 from 10.85.0.7, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff de db 52 e5 c3 20 08 06        ........R.. ..
	* [  +0.870917] IPv4: martian source 10.85.0.8 from 10.85.0.8, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff b2 69 2e ed 78 05 08 06        .......i..x...
	* [  +1.300172] IPv4: martian source 10.85.0.9 from 10.85.0.9, on dev eth0
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff d2 67 cb cb 00 84 08 06        .......g......
	* [  +1.032988] IPv4: martian source 10.85.0.10 from 10.85.0.10, on dev eth0
	* [  +0.000002] ll header: 00000000: ff ff ff ff ff ff 0a 47 ae fa 47 c2 08 06        .......G..G...
	* [  +1.018971] IPv4: martian source 10.85.0.11 from 10.85.0.11, on dev eth0
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff d6 bf 4e 88 68 91 08 06        ........N.h...
	* [  +1.026702] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethd6ee52ab
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 8a 0c 6e 94 e2 33 08 06        ........n..3..
	* [  +5.124598] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +17.388333] cgroup: cgroup2: unknown option "nsdelegate"
	* [Dec11 20:38] cgroup: cgroup2: unknown option "nsdelegate"
	* [Dec11 20:39] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +16.296127] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth6d3b9e3d
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa eb fa d7 11 74 08 06        ...........t..
	* [  +3.224752] cgroup: cgroup2: unknown option "nsdelegate"
	* [Dec11 20:40] cgroup: cgroup2: unknown option "nsdelegate"
	* [Dec11 20:44] cgroup: cgroup2: unknown option "nsdelegate"
	* [Dec11 20:45] cgroup: cgroup2: unknown option "nsdelegate"
	* 
	* ==> etcd [34ff8741eb9e] <==
	* raft2020/12/11 20:45:40 INFO: 3be816cd21eae4fe switched to configuration voters=(4316725313127769342)
	* 2020-12-11 20:45:40.940023 W | auth: simple token is not cryptographically signed
	* 2020-12-11 20:45:40.945200 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	* 2020-12-11 20:45:40.945443 I | etcdserver: 3be816cd21eae4fe as single-node; fast-forwarding 9 ticks (election ticks 10)
	* raft2020/12/11 20:45:40 INFO: 3be816cd21eae4fe switched to configuration voters=(4316725313127769342)
	* 2020-12-11 20:45:40.945821 I | etcdserver/membership: added member 3be816cd21eae4fe [https://192.168.59.176:2380] to cluster ce580c4975538a9c
	* 2020-12-11 20:45:40.947625 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	* 2020-12-11 20:45:40.947752 I | embed: listening for peers on 192.168.59.176:2380
	* 2020-12-11 20:45:40.947844 I | embed: listening for metrics on http://127.0.0.1:2381
	* raft2020/12/11 20:45:41 INFO: 3be816cd21eae4fe is starting a new election at term 1
	* raft2020/12/11 20:45:41 INFO: 3be816cd21eae4fe became candidate at term 2
	* raft2020/12/11 20:45:41 INFO: 3be816cd21eae4fe received MsgVoteResp from 3be816cd21eae4fe at term 2
	* raft2020/12/11 20:45:41 INFO: 3be816cd21eae4fe became leader at term 2
	* raft2020/12/11 20:45:41 INFO: raft.node: 3be816cd21eae4fe elected leader 3be816cd21eae4fe at term 2
	* 2020-12-11 20:45:41.337704 I | etcdserver: setting up the initial cluster version to 3.4
	* 2020-12-11 20:45:41.338875 N | etcdserver/membership: set the initial cluster version to 3.4
	* 2020-12-11 20:45:41.338947 I | etcdserver/api: enabled capabilities for version 3.4
	* 2020-12-11 20:45:41.339005 I | etcdserver: published {Name:skaffold-20201211204522-6575 ClientURLs:[https://192.168.59.176:2379]} to cluster ce580c4975538a9c
	* 2020-12-11 20:45:41.339023 I | embed: ready to serve client requests
	* 2020-12-11 20:45:41.339254 I | embed: ready to serve client requests
	* 2020-12-11 20:45:41.340888 I | embed: serving client requests on 127.0.0.1:2379
	* 2020-12-11 20:45:41.379774 I | embed: serving client requests on 192.168.59.176:2379
	* 2020-12-11 20:45:52.805573 N | pkg/osutil: received terminated signal, shutting down...
	* WARNING: 2020/12/11 20:45:52 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* 2020-12-11 20:45:52.822089 I | etcdserver: skipped leadership transfer for single voting member cluster
	* 
	* ==> etcd [ce983f1aa07f] <==
	* 2020-12-11 20:45:59.333677 W | etcdserver: read-only range request "key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true " with result "range_response_count:0 size:5" took too long (960.066041ms) to execute
	* 2020-12-11 20:45:59.333988 W | etcdserver: read-only range request "key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" limit:10000 " with result "range_response_count:12 size:9121" took too long (1.239943847s) to execute
	* 2020-12-11 20:45:59.334162 W | etcdserver: read-only range request "key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true " with result "range_response_count:0 size:7" took too long (1.240001547s) to execute
	* 2020-12-11 20:45:59.334368 W | etcdserver: read-only range request "key:\"/registry/roles/\" range_end:\"/registry/roles0\" limit:10000 " with result "range_response_count:12 size:7107" took too long (1.248947403s) to execute
	* 2020-12-11 20:45:59.334403 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" limit:10000 " with result "range_response_count:0 size:5" took too long (1.256953672s) to execute
	* 2020-12-11 20:45:59.334464 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" count_only:true " with result "range_response_count:0 size:5" took too long (1.257030741s) to execute
	* 2020-12-11 20:45:59.334641 W | etcdserver: read-only range request "key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true " with result "range_response_count:0 size:7" took too long (1.144529008s) to execute
	* 2020-12-11 20:45:59.334900 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true " with result "range_response_count:0 size:5" took too long (1.265659898s) to execute
	* 2020-12-11 20:45:59.335128 W | etcdserver: read-only range request "key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" limit:10000 " with result "range_response_count:1 size:992" took too long (1.144677757s) to execute
	* 2020-12-11 20:45:59.335400 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" limit:10000 " with result "range_response_count:0 size:5" took too long (1.265814552s) to execute
	* 2020-12-11 20:45:59.336140 W | etcdserver: read-only range request "key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:10000 " with result "range_response_count:2 size:910" took too long (1.152946987s) to execute
	* 2020-12-11 20:45:59.336522 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 " with result "range_response_count:0 size:5" took too long (1.274084154s) to execute
	* 2020-12-11 20:45:59.336817 W | etcdserver: read-only range request "key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true " with result "range_response_count:0 size:7" took too long (1.153072296s) to execute
	* 2020-12-11 20:45:59.337137 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (1.274191871s) to execute
	* 2020-12-11 20:45:59.338211 W | etcdserver: read-only range request "key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:10000 " with result "range_response_count:2 size:910" took too long (1.166226362s) to execute
	* 2020-12-11 20:45:59.338550 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (1.2838444s) to execute
	* 2020-12-11 20:45:59.338734 W | etcdserver: read-only range request "key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true " with result "range_response_count:0 size:7" took too long (1.166393761s) to execute
	* 2020-12-11 20:45:59.338819 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 " with result "range_response_count:0 size:5" took too long (1.283966376s) to execute
	* 2020-12-11 20:45:59.339685 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" limit:10000 " with result "range_response_count:49 size:36604" took too long (1.175882312s) to execute
	* 2020-12-11 20:45:59.339742 W | etcdserver: read-only range request "key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (1.294699726s) to execute
	* 2020-12-11 20:45:59.339794 W | etcdserver: read-only range request "key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true " with result "range_response_count:0 size:5" took too long (1.302709061s) to execute
	* 2020-12-11 20:46:01.969312 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:422" took too long (980.112115ms) to execute
	* 2020-12-11 20:46:01.978124 W | etcdserver: read-only range request "key:\"/registry/resourcequotas/kube-system/\" range_end:\"/registry/resourcequotas/kube-system0\" " with result "range_response_count:0 size:5" took too long (777.960766ms) to execute
	* 2020-12-11 20:46:01.978502 W | etcdserver: read-only range request "key:\"/registry/prioritylevelconfigurations/exempt\" " with result "range_response_count:1 size:371" took too long (983.200804ms) to execute
	* 2020-12-11 20:46:01.978680 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-skaffold-20201211204522-6575\" " with result "range_response_count:1 size:6179" took too long (985.206807ms) to execute
	* 
	* ==> kernel <==
	*  20:46:03 up 28 min,  0 users,  load average: 1.53, 1.86, 1.63
	* Linux skaffold-20201211204522-6575 4.9.0-14-amd64 #1 SMP Debian 4.9.240-2 (2020-10-30) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [7b6ef6577ba7] <==
	* W1211 20:45:52.820034       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* I1211 20:45:52.820054       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	* W1211 20:45:52.820077       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:45:52.820078       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:45:52.820112       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:45:52.819837       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:45:52.820176       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* I1211 20:45:52.820205       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	* I1211 20:45:52.820243       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	* W1211 20:45:52.820340       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* I1211 20:45:52.820351       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	* W1211 20:45:52.820385       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:45:52.820447       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:45:52.820480       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* I1211 20:45:52.820490       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	* W1211 20:45:52.820535       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:45:52.820597       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:45:52.820631       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:45:52.820686       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:45:52.820704       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:45:52.820813       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:45:52.820828       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:45:52.820844       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* I1211 20:45:52.879396       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	* W1211 20:45:52.879637       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* 
	* ==> kube-apiserver [eab3db0200bb] <==
	* Trace[814555369]: ---"Transaction committed" 779ms (20:46:00.970)
	* Trace[814555369]: [782.260051ms] [782.260051ms] END
	* I1211 20:46:01.971704       1 trace.go:205] Trace[1215939447]: "Patch" url:/api/v1/nodes/skaffold-20201211204522-6575/status,user-agent:kubelet/v1.20.0 (linux/amd64) kubernetes/af46c47,client:192.168.59.176 (11-Dec-2020 20:46:01.188) (total time: 783ms):
	* Trace[1215939447]: ---"Object stored in database" 779ms (20:46:00.971)
	* Trace[1215939447]: [783.061896ms] [783.061896ms] END
	* I1211 20:46:01.980818       1 trace.go:205] Trace[975985850]: "List etcd3" key:/resourcequotas/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: (11-Dec-2020 20:46:01.199) (total time: 781ms):
	* Trace[975985850]: [781.042774ms] [781.042774ms] END
	* I1211 20:46:01.980835       1 trace.go:205] Trace[1046865108]: "Get" url:/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations/exempt,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/af46c47,client:127.0.0.1 (11-Dec-2020 20:46:00.994) (total time: 985ms):
	* Trace[1046865108]: ---"About to write a response" 985ms (20:46:00.980)
	* Trace[1046865108]: [985.847837ms] [985.847837ms] END
	* I1211 20:46:01.980934       1 trace.go:205] Trace[2038522236]: "List" url:/api/v1/namespaces/kube-system/resourcequotas,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/af46c47,client:127.0.0.1 (11-Dec-2020 20:46:01.199) (total time: 781ms):
	* Trace[2038522236]: ---"Listing from storage done" 781ms (20:46:00.980)
	* Trace[2038522236]: [781.201044ms] [781.201044ms] END
	* I1211 20:46:01.981577       1 trace.go:205] Trace[298957945]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.20.0 (linux/amd64) kubernetes/af46c47,client:192.168.59.176 (11-Dec-2020 20:46:00.990) (total time: 990ms):
	* Trace[298957945]: ---"Object stored in database" 990ms (20:46:00.981)
	* Trace[298957945]: [990.812752ms] [990.812752ms] END
	* E1211 20:46:01.984504       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	* I1211 20:46:01.985187       1 trace.go:205] Trace[257055649]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-skaffold-20201211204522-6575,user-agent:kubelet/v1.20.0 (linux/amd64) kubernetes/af46c47,client:192.168.59.176 (11-Dec-2020 20:46:00.991) (total time: 993ms):
	* Trace[257055649]: ---"About to write a response" 992ms (20:46:00.984)
	* Trace[257055649]: [993.22922ms] [993.22922ms] END
	* I1211 20:46:01.985498       1 trace.go:205] Trace[275923415]: "Create" url:/apis/events.k8s.io/v1/namespaces/kube-system/events,user-agent:kube-scheduler/v1.20.0 (linux/amd64) kubernetes/af46c47/scheduler,client:192.168.59.176 (11-Dec-2020 20:46:01.198) (total time: 787ms):
	* Trace[275923415]: ---"Object stored in database" 787ms (20:46:00.985)
	* Trace[275923415]: [787.187394ms] [787.187394ms] END
	* I1211 20:46:01.987340       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	* I1211 20:46:03.549595       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	* 
	* ==> kube-controller-manager [72de7737a071] <==
	* I1211 20:46:03.544951       1 serviceaccounts_controller.go:117] Starting service account controller
	* I1211 20:46:03.544971       1 shared_informer.go:240] Waiting for caches to sync for service account
	* I1211 20:46:03.637604       1 shared_informer.go:247] Caches are synced for tokens 
	* I1211 20:46:03.650034       1 controllermanager.go:554] Started "statefulset"
	* I1211 20:46:03.650075       1 stateful_set.go:146] Starting stateful set controller
	* I1211 20:46:03.650092       1 shared_informer.go:240] Waiting for caches to sync for stateful set
	* I1211 20:46:03.655574       1 node_lifecycle_controller.go:380] Sending events to api server.
	* I1211 20:46:03.655804       1 taint_manager.go:163] Sending events to api server.
	* I1211 20:46:03.655877       1 node_lifecycle_controller.go:508] Controller will reconcile labels.
	* I1211 20:46:03.655927       1 controllermanager.go:554] Started "nodelifecycle"
	* I1211 20:46:03.656071       1 node_lifecycle_controller.go:542] Starting node controller
	* I1211 20:46:03.656091       1 shared_informer.go:240] Waiting for caches to sync for taint
	* I1211 20:46:03.661697       1 controllermanager.go:554] Started "root-ca-cert-publisher"
	* I1211 20:46:03.661802       1 publisher.go:98] Starting root CA certificate configmap publisher
	* I1211 20:46:03.661817       1 shared_informer.go:240] Waiting for caches to sync for crt configmap
	* I1211 20:46:03.676951       1 controllermanager.go:554] Started "bootstrapsigner"
	* W1211 20:46:03.676976       1 core.go:246] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
	* W1211 20:46:03.676984       1 controllermanager.go:546] Skipping "route"
	* I1211 20:46:03.677161       1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
	* I1211 20:46:03.698375       1 controllermanager.go:554] Started "persistentvolume-expander"
	* I1211 20:46:03.698553       1 expand_controller.go:310] Starting expand controller
	* I1211 20:46:03.698574       1 shared_informer.go:240] Waiting for caches to sync for expand
	* I1211 20:46:03.715964       1 controllermanager.go:554] Started "podgc"
	* I1211 20:46:03.716079       1 gc_controller.go:89] Starting GC controller
	* I1211 20:46:03.716124       1 shared_informer.go:240] Waiting for caches to sync for GC
	* 
	* ==> kube-controller-manager [76fa734100cb] <==
	* I1211 20:45:50.883906       1 graph_builder.go:289] GraphBuilder running
	* I1211 20:45:51.089772       1 controllermanager.go:554] Started "daemonset"
	* I1211 20:45:51.089811       1 daemon_controller.go:285] Starting daemon sets controller
	* I1211 20:45:51.089821       1 shared_informer.go:240] Waiting for caches to sync for daemon sets
	* I1211 20:45:51.382607       1 controllermanager.go:554] Started "replicaset"
	* I1211 20:45:51.382681       1 replica_set.go:182] Starting replicaset controller
	* I1211 20:45:51.382690       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
	* I1211 20:45:51.589868       1 controllermanager.go:554] Started "clusterrole-aggregation"
	* I1211 20:45:51.589940       1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
	* I1211 20:45:51.589957       1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
	* I1211 20:45:51.840058       1 controllermanager.go:554] Started "serviceaccount"
	* I1211 20:45:51.840148       1 serviceaccounts_controller.go:117] Starting service account controller
	* I1211 20:45:51.840208       1 shared_informer.go:240] Waiting for caches to sync for service account
	* I1211 20:45:51.989339       1 controllermanager.go:554] Started "csrapproving"
	* I1211 20:45:51.989394       1 certificate_controller.go:118] Starting certificate controller "csrapproving"
	* I1211 20:45:51.989408       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
	* I1211 20:45:52.240373       1 controllermanager.go:554] Started "root-ca-cert-publisher"
	* I1211 20:45:52.240454       1 publisher.go:98] Starting root CA certificate configmap publisher
	* I1211 20:45:52.240463       1 shared_informer.go:240] Waiting for caches to sync for crt configmap
	* I1211 20:45:52.489796       1 controllermanager.go:554] Started "endpoint"
	* I1211 20:45:52.489873       1 endpoints_controller.go:184] Starting endpoint controller
	* I1211 20:45:52.489882       1 shared_informer.go:240] Waiting for caches to sync for endpoint
	* I1211 20:45:52.747110       1 controllermanager.go:554] Started "namespace"
	* I1211 20:45:52.747180       1 namespace_controller.go:200] Starting namespace controller
	* I1211 20:45:52.747188       1 shared_informer.go:240] Waiting for caches to sync for namespace
	* 
	* ==> kube-scheduler [4dace7a8f06f] <==
	* W1211 20:45:46.786830       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	* W1211 20:45:46.786866       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	* W1211 20:45:46.786892       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	* W1211 20:45:46.786904       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	* I1211 20:45:46.982431       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1211 20:45:46.982463       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1211 20:45:46.983112       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* I1211 20:45:46.983240       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	* E1211 20:45:46.988657       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1211 20:45:46.989014       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1211 20:45:46.989125       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1211 20:45:46.989242       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1211 20:45:46.989355       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1211 20:45:46.989364       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1211 20:45:46.989477       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1211 20:45:46.989512       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1211 20:45:46.990052       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1211 20:45:46.990066       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1211 20:45:46.990221       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1211 20:45:46.990294       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1211 20:45:47.992828       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1211 20:45:48.004293       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1211 20:45:48.019964       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1211 20:45:48.139838       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* I1211 20:45:49.982618       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kube-scheduler [78eae12b32cb] <==
	* I1211 20:45:56.417115       1 serving.go:331] Generated self-signed cert in-memory
	* W1211 20:46:01.003746       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	* W1211 20:46:01.003779       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	* W1211 20:46:01.003790       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	* W1211 20:46:01.003798       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	* I1211 20:46:01.095479       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1211 20:46:01.095516       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1211 20:46:01.096574       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	* I1211 20:46:01.097688       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* I1211 20:46:01.195777       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2020-12-11 20:45:25 UTC, end at Fri 2020-12-11 20:46:04 UTC. --
	* Dec 11 20:45:51 skaffold-20201211204522-6575 kubelet[2359]: I1211 20:45:51.386710    2359 reconciler.go:157] Reconciler: start to sync state
	* Dec 11 20:45:53 skaffold-20201211204522-6575 kubelet[2359]: E1211 20:45:53.032833    2359 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-skaffold-20201211204522-6575.164fc46a58f1e00e", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-skaffold-20201211204522-6575", UID:"30fb9afba4c39ffe9c14831adf8aec3e", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Startup probe failed: Get \"https://192.168.59.176:8443/livez\": dial tcp 192.168.59.176:8443: connect: connection refused", Source:v1.EventSource{Component:"kubelet", Host:"skaffold-20201211204522-6575"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfed148041ee160e, ext:3351592241, loc:(*time.Location)(0x70c7020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfed148041ee160e, ext:3351592241, loc:(*time.Location)(0x70c7020)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events": dial tcp 192.168.59.176:8443: connect: connection refused'(may retry after sleeping)
	* Dec 11 20:45:53 skaffold-20201211204522-6575 kubelet[2359]: W1211 20:45:53.697931    2359 pod_container_deletor.go:79] Container "7ac653ada5bc8ead21baa69bc495bb48513c140d37673cf0a7e92d816189be4e" not found in pod's containers
	* Dec 11 20:45:53 skaffold-20201211204522-6575 kubelet[2359]: W1211 20:45:53.703162    2359 pod_container_deletor.go:79] Container "a023ef9b93da9de000ced7ee32921727e7de2682baeb8e6c6c3ad72c609a8cc6" not found in pod's containers
	* Dec 11 20:45:53 skaffold-20201211204522-6575 kubelet[2359]: W1211 20:45:53.703777    2359 status_manager.go:550] Failed to get status for pod "etcd-skaffold-20201211204522-6575_kube-system(4e3082b62e5f1d8c312fdb29b13562b0)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-skaffold-20201211204522-6575": dial tcp 192.168.59.176:8443: connect: connection refused
	* Dec 11 20:45:53 skaffold-20201211204522-6575 kubelet[2359]: E1211 20:45:53.706137    2359 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "etcd-skaffold-20201211204522-6575": Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	* Dec 11 20:45:53 skaffold-20201211204522-6575 kubelet[2359]: E1211 20:45:53.706202    2359 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "etcd-skaffold-20201211204522-6575_kube-system(4e3082b62e5f1d8c312fdb29b13562b0)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "etcd-skaffold-20201211204522-6575": Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	* Dec 11 20:45:53 skaffold-20201211204522-6575 kubelet[2359]: E1211 20:45:53.706226    2359 kuberuntime_manager.go:755] createPodSandbox for pod "etcd-skaffold-20201211204522-6575_kube-system(4e3082b62e5f1d8c312fdb29b13562b0)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "etcd-skaffold-20201211204522-6575": Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	* Dec 11 20:45:53 skaffold-20201211204522-6575 kubelet[2359]: E1211 20:45:53.706281    2359 pod_workers.go:191] Error syncing pod 4e3082b62e5f1d8c312fdb29b13562b0 ("etcd-skaffold-20201211204522-6575_kube-system(4e3082b62e5f1d8c312fdb29b13562b0)"), skipping: failed to "CreatePodSandbox" for "etcd-skaffold-20201211204522-6575_kube-system(4e3082b62e5f1d8c312fdb29b13562b0)" with CreatePodSandboxError: "CreatePodSandbox for pod \"etcd-skaffold-20201211204522-6575_kube-system(4e3082b62e5f1d8c312fdb29b13562b0)\" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod \"etcd-skaffold-20201211204522-6575\": Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	* Dec 11 20:45:53 skaffold-20201211204522-6575 kubelet[2359]: W1211 20:45:53.708213    2359 pod_container_deletor.go:79] Container "3b8163b702b6173346f2352ed440b8496cd5512588c1a6bffb8e60cb2908fcdd" not found in pod's containers
	* Dec 11 20:45:53 skaffold-20201211204522-6575 kubelet[2359]: W1211 20:45:53.709065    2359 status_manager.go:550] Failed to get status for pod "kube-scheduler-skaffold-20201211204522-6575_kube-system(3478da2c440ba32fb6c087b3f3b99813)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-skaffold-20201211204522-6575": dial tcp 192.168.59.176:8443: connect: connection refused
	* Dec 11 20:45:53 skaffold-20201211204522-6575 kubelet[2359]: E1211 20:45:53.709566    2359 kuberuntime_manager.go:965] PodSandboxStatus of sandbox "c441e30e9faf50ed575cfbbe496450d76ef94fe54f5502678139460aa60751e2" for pod "kube-controller-manager-skaffold-20201211204522-6575_kube-system(a3e7be694ef7cf952503c5d331abc0ac)" error: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	* Dec 11 20:45:53 skaffold-20201211204522-6575 kubelet[2359]: E1211 20:45:53.847550    2359 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-apiserver-skaffold-20201211204522-6575": error during connect: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/create?name=k8s_POD_kube-apiserver-skaffold-20201211204522-6575_kube-system_30fb9afba4c39ffe9c14831adf8aec3e_1": EOF
	* Dec 11 20:45:53 skaffold-20201211204522-6575 kubelet[2359]: E1211 20:45:53.847610    2359 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "kube-apiserver-skaffold-20201211204522-6575_kube-system(30fb9afba4c39ffe9c14831adf8aec3e)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-apiserver-skaffold-20201211204522-6575": error during connect: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/create?name=k8s_POD_kube-apiserver-skaffold-20201211204522-6575_kube-system_30fb9afba4c39ffe9c14831adf8aec3e_1": EOF
	* Dec 11 20:45:53 skaffold-20201211204522-6575 kubelet[2359]: E1211 20:45:53.847627    2359 kuberuntime_manager.go:755] createPodSandbox for pod "kube-apiserver-skaffold-20201211204522-6575_kube-system(30fb9afba4c39ffe9c14831adf8aec3e)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-apiserver-skaffold-20201211204522-6575": error during connect: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/create?name=k8s_POD_kube-apiserver-skaffold-20201211204522-6575_kube-system_30fb9afba4c39ffe9c14831adf8aec3e_1": EOF
	* Dec 11 20:45:53 skaffold-20201211204522-6575 kubelet[2359]: E1211 20:45:53.847546    2359 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-scheduler-skaffold-20201211204522-6575": error during connect: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/124ac1733d1bf863ed3310cf60535bf6ad6b0fec4417f1e5912f6da3c5779eaa/start": EOF
	* Dec 11 20:45:53 skaffold-20201211204522-6575 kubelet[2359]: E1211 20:45:53.847683    2359 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "kube-scheduler-skaffold-20201211204522-6575_kube-system(3478da2c440ba32fb6c087b3f3b99813)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-scheduler-skaffold-20201211204522-6575": error during connect: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/124ac1733d1bf863ed3310cf60535bf6ad6b0fec4417f1e5912f6da3c5779eaa/start": EOF
	* Dec 11 20:45:53 skaffold-20201211204522-6575 kubelet[2359]: E1211 20:45:53.847689    2359 pod_workers.go:191] Error syncing pod 30fb9afba4c39ffe9c14831adf8aec3e ("kube-apiserver-skaffold-20201211204522-6575_kube-system(30fb9afba4c39ffe9c14831adf8aec3e)"), skipping: failed to "CreatePodSandbox" for "kube-apiserver-skaffold-20201211204522-6575_kube-system(30fb9afba4c39ffe9c14831adf8aec3e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-apiserver-skaffold-20201211204522-6575_kube-system(30fb9afba4c39ffe9c14831adf8aec3e)\" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-apiserver-skaffold-20201211204522-6575\": error during connect: Post \"http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/create?name=k8s_POD_kube-apiserver-skaffold-20201211204522-6575_kube-system_30fb9afba4c39ffe9c14831adf8aec3e_1\": EOF"
	* Dec 11 20:45:53 skaffold-20201211204522-6575 kubelet[2359]: E1211 20:45:53.847704    2359 kuberuntime_manager.go:755] createPodSandbox for pod "kube-scheduler-skaffold-20201211204522-6575_kube-system(3478da2c440ba32fb6c087b3f3b99813)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-scheduler-skaffold-20201211204522-6575": error during connect: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/124ac1733d1bf863ed3310cf60535bf6ad6b0fec4417f1e5912f6da3c5779eaa/start": EOF
	* Dec 11 20:45:53 skaffold-20201211204522-6575 kubelet[2359]: E1211 20:45:53.847749    2359 pod_workers.go:191] Error syncing pod 3478da2c440ba32fb6c087b3f3b99813 ("kube-scheduler-skaffold-20201211204522-6575_kube-system(3478da2c440ba32fb6c087b3f3b99813)"), skipping: failed to "CreatePodSandbox" for "kube-scheduler-skaffold-20201211204522-6575_kube-system(3478da2c440ba32fb6c087b3f3b99813)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-scheduler-skaffold-20201211204522-6575_kube-system(3478da2c440ba32fb6c087b3f3b99813)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"kube-scheduler-skaffold-20201211204522-6575\": error during connect: Post \"http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/124ac1733d1bf863ed3310cf60535bf6ad6b0fec4417f1e5912f6da3c5779eaa/start\": EOF"
	* Dec 11 20:45:54 skaffold-20201211204522-6575 kubelet[2359]: W1211 20:45:54.719870    2359 pod_container_deletor.go:79] Container "124ac1733d1bf863ed3310cf60535bf6ad6b0fec4417f1e5912f6da3c5779eaa" not found in pod's containers
	* Dec 11 20:45:54 skaffold-20201211204522-6575 kubelet[2359]: W1211 20:45:54.724877    2359 pod_container_deletor.go:79] Container "c441e30e9faf50ed575cfbbe496450d76ef94fe54f5502678139460aa60751e2" not found in pod's containers
	* Dec 11 20:45:54 skaffold-20201211204522-6575 kubelet[2359]: W1211 20:45:54.725447    2359 status_manager.go:550] Failed to get status for pod "kube-controller-manager-skaffold-20201211204522-6575_kube-system(a3e7be694ef7cf952503c5d331abc0ac)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-skaffold-20201211204522-6575": dial tcp 192.168.59.176:8443: connect: connection refused
	* Dec 11 20:45:54 skaffold-20201211204522-6575 kubelet[2359]: W1211 20:45:54.730678    2359 status_manager.go:550] Failed to get status for pod "kube-apiserver-skaffold-20201211204522-6575_kube-system(30fb9afba4c39ffe9c14831adf8aec3e)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-skaffold-20201211204522-6575": dial tcp 192.168.59.176:8443: connect: connection refused
	* Dec 11 20:45:55 skaffold-20201211204522-6575 kubelet[2359]: W1211 20:45:55.791474    2359 status_manager.go:550] Failed to get status for pod "kube-controller-manager-skaffold-20201211204522-6575_kube-system(a3e7be694ef7cf952503c5d331abc0ac)": Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-skaffold-20201211204522-6575": dial tcp 192.168.59.176:8443: connect: connection refused

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p skaffold-20201211204522-6575 -n skaffold-20201211204522-6575
helpers_test.go:255: (dbg) Run:  kubectl --context skaffold-20201211204522-6575 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: non-running pods: storage-provisioner
helpers_test.go:263: ======> post-mortem[TestSkaffold]: describe non-running pods <======
helpers_test.go:266: (dbg) Run:  kubectl --context skaffold-20201211204522-6575 describe pod storage-provisioner
helpers_test.go:266: (dbg) Non-zero exit: kubectl --context skaffold-20201211204522-6575 describe pod storage-provisioner: exit status 1 (80.362355ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:268: kubectl --context skaffold-20201211204522-6575 describe pod storage-provisioner: exit status 1
helpers_test.go:171: Cleaning up "skaffold-20201211204522-6575" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-20201211204522-6575
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-20201211204522-6575: (2.764831245s)
--- FAIL: TestSkaffold (45.20s)

                                                
                                    
TestFunctional/parallel/DockerEnv (25.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:177: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20201211203409-6575 docker-env) && out/minikube-linux-amd64 status -p functional-20201211203409-6575"

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:177: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20201211203409-6575 docker-env) && out/minikube-linux-amd64 status -p functional-20201211203409-6575": exit status 2 (13.751898618s)

                                                
                                                
-- stdout --
	functional-20201211203409-6575
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	timeToStop: Nonexistent
	

                                                
                                                
-- /stdout --
functional_test.go:183: failed to do status after eval-ing docker-env. error: exit status 2
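For context, the command under test above eval-s the shell exports printed by 'minikube docker-env' and then re-runs 'minikube status' against the same profile. A minimal sketch of what that step roughly amounts to in bash is shown below; the exports are the ones minikube typically emits for the docker driver, but the concrete values (host port, cert path) are placeholders rather than captured output, apart from port 32786, which matches the 2376/tcp binding in the docker inspect output further down.

	# illustrative sketch only -- placeholder values, not captured from this report
	eval "$(out/minikube-linux-amd64 -p functional-20201211203409-6575 docker-env)"
	# the eval typically brings in exports along these lines:
	#   export DOCKER_TLS_VERIFY="1"
	#   export DOCKER_HOST="tcp://127.0.0.1:32786"            # forwarded 2376/tcp port of the node container
	#   export DOCKER_CERT_PATH="$HOME/.minikube/certs"       # assumed default location
	#   export MINIKUBE_ACTIVE_DOCKERD="functional-20201211203409-6575"
	# with those set, docker CLI calls and the follow-up status check go over the
	# forwarded TLS port to the dockerd running inside the node container
	out/minikube-linux-amd64 status -p functional-20201211203409-6575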
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/DockerEnv]: docker inspect <======
helpers_test.go:225: (dbg) Run:  docker inspect functional-20201211203409-6575
helpers_test.go:229: (dbg) docker inspect functional-20201211203409-6575:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "58bc21c69ea05ab0989a71870d5669460b09aa939b6881a4b2bdb079281b963c",
	        "Created": "2020-12-11T20:34:11.449981949Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 40154,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-12-11T20:34:11.974843973Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:06db6ca724463f987019154e0475424113315da76733d5b67f90e35719d46c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/58bc21c69ea05ab0989a71870d5669460b09aa939b6881a4b2bdb079281b963c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/58bc21c69ea05ab0989a71870d5669460b09aa939b6881a4b2bdb079281b963c/hostname",
	        "HostsPath": "/var/lib/docker/containers/58bc21c69ea05ab0989a71870d5669460b09aa939b6881a4b2bdb079281b963c/hosts",
	        "LogPath": "/var/lib/docker/containers/58bc21c69ea05ab0989a71870d5669460b09aa939b6881a4b2bdb079281b963c/58bc21c69ea05ab0989a71870d5669460b09aa939b6881a4b2bdb079281b963c-json.log",
	        "Name": "/functional-20201211203409-6575",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20201211203409-6575:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20201211203409-6575",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0d4b4499ac47d542347db71c43f211482af93655a809475206f01a44342ce697-init/diff:/var/lib/docker/overlay2/0e454b6080878527938ea7de6c9c9ca3b4e998e7de12dad0ae5d11e5a02b28ed/diff:/var/lib/docker/overlay2/b1080cf3f1c07fe0908008d3edafdd563736ac0728810681091089d69df5a2b3/diff:/var/lib/docker/overlay2/4b9b1225dce44601fa36a73ecf95bd95cdf7d55d704e1b1b9c58ca9a4f4c9db6/diff:/var/lib/docker/overlay2/9afd996885692e28e1b5ceccecab499370a88a6f5aa9af63f99ddcdf47496087/diff:/var/lib/docker/overlay2/e172f94dbfa36e7b7833646884a898590a9a7bb72f03461557b32d46044e2bf3/diff:/var/lib/docker/overlay2/5e940e100bc7df14fbaaca63292506dc23bd933bac16de182b67f8107cfc71b5/diff:/var/lib/docker/overlay2/597b039ba9eb2c747ffeca49b446b61e3121712aac6a1ce013d50a6998c46d93/diff:/var/lib/docker/overlay2/a3668e684110bc4003a16d3fe4a80296a1308a376b08c428b3ae7469edae4b8b/diff:/var/lib/docker/overlay2/617d4351ebaac66d685427e45dfe1cd0a4e0e0ac9dc4ccb7a2f382e1bfc8697d/diff:/var/lib/docker/overlay2/4ac76b
6b6d368a1f1073f4a45b6b80445e04f69010fcc5e524fe8edc2708fd5c/diff:/var/lib/docker/overlay2/155961688c82c43af6d27a734eeeae0fd8eb1766bbb9d2728d834a593622196c/diff:/var/lib/docker/overlay2/8f6b3c33ada50dd91034ae6c41722655db1e7a86bb4b61e1152696c41336aa44/diff:/var/lib/docker/overlay2/39286e41dafe62c271b224fbeaa14b9ca058246bcc76e7a81d75f765a497015e/diff:/var/lib/docker/overlay2/bc0dfc1142718ddc4a235a7a62a371f8d580e48ef41f886bce3bb6598f329ea5/diff:/var/lib/docker/overlay2/285b5808cfef05f77db3330400ad926f089148ece291f130ab7f4c822fa7be5a/diff:/var/lib/docker/overlay2/ac9feea04da985551cdd80f2698f28d116958d31882dc3888245ace574de7021/diff:/var/lib/docker/overlay2/93bc1cd943ffc655fc209b896ce12e6863e5adcbc32cee2830311b118ef17f97/diff:/var/lib/docker/overlay2/e9ca47c898e17aff2b310e13256ac91b8efff61646ca77ebe764817e42c9e278/diff:/var/lib/docker/overlay2/a0c4a393ccf7eb7a3b75e0e421d72749d0641f4e74b689d9b2a1dc9d5f9f2985/diff:/var/lib/docker/overlay2/f3ed5047774399a74e83080908727ed9f42ef86312f5879e724083ee96dc4a98/diff:/var/lib/d
ocker/overlay2/17ad49d1fc1b2eb336aaec3919f67e90045a835b6ad03fa72a6a02f8b2d7a6f9/diff:/var/lib/docker/overlay2/dda0100db23cb2ecb0d502c53e7269e90548d0f6043615cfefea6fd1a42ef67f/diff:/var/lib/docker/overlay2/accfdaeb1d703f222e13547e8fd4c06305c41c6256ac59237b67ac17b702ff5d/diff:/var/lib/docker/overlay2/e4dc6c7d508ce1056ebda7e5bf4239bb48eaa2ad06a4d483e67380212ef84a10/diff:/var/lib/docker/overlay2/d6be635d55a88f802a01d5678467aa3fe46b951550c2d880458b733ff9f56a19/diff:/var/lib/docker/overlay2/d31bed28baf1efe4c8eea0c512e6641cdfa98622cfa1f45f4f463c8e4f0ea9e6/diff:/var/lib/docker/overlay2/4eb064c2961cd60979726e7d7a78d8ac3a96af3b41699c69090b4aec9263e5f7/diff:/var/lib/docker/overlay2/66ec0abca0956048e37f5c5e2125cf299a362b35d473021004316fd83d85d33b/diff:/var/lib/docker/overlay2/5ba45d5dede37c09dccf9193592382ae7516027a675d2631ec64e687b9745c00/diff:/var/lib/docker/overlay2/1ceade4823b29f813f08c3db2bd4d966999ac776084d4b7b054d7220b5689943/diff:/var/lib/docker/overlay2/0e3148261963465326228e3e9b1f52398a9551f01263a9b78bebaa06184
ed2af/diff:/var/lib/docker/overlay2/4d90ca5a45e75e28de2e79390ac1c0075f60b1bbd9446a169ec9c45ca0702256/diff:/var/lib/docker/overlay2/b317057fb455e15ebe8bf80b7713f8ad35aff0405e06e48a935b1965b47214e7/diff:/var/lib/docker/overlay2/ed65d2c3c21872669bade9290359857509ebcf1d7427db303d876c8efdcda07b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0d4b4499ac47d542347db71c43f211482af93655a809475206f01a44342ce697/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0d4b4499ac47d542347db71c43f211482af93655a809475206f01a44342ce697/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0d4b4499ac47d542347db71c43f211482af93655a809475206f01a44342ce697/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20201211203409-6575",
	                "Source": "/var/lib/docker/volumes/functional-20201211203409-6575/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20201211203409-6575",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20201211203409-6575",
	                "name.minikube.sigs.k8s.io": "functional-20201211203409-6575",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e044a9e5f094e3ea81103125e3cd5df7598bca1bb27b105be1394237673e1381",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e044a9e5f094",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20201211203409-6575": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.176"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "58bc21c69ea0"
	                    ],
	                    "NetworkID": "2ba6a3bb9e78d244edbe04ffe18d21902e1b69937cf692d35e313f4e326bfd21",
	                    "EndpointID": "50236cc5e32dbbb7604c3dbacc0c6d09a4aef377041b2ca16a977285eb3f774d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.176",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:b0",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-20201211203409-6575 -n functional-20201211203409-6575
helpers_test.go:233: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-20201211203409-6575 -n functional-20201211203409-6575: exit status 2 (432.276315ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:233: status error: exit status 2 (may be ok)
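The harness line above pulls a single field out of 'minikube status' with a Go template (--format={{.Host}}); the same flag accepts any template over the status fields already visible in the earlier stdout block (host, kubelet, apiserver, kubeconfig, timeToStop). A small illustrative example, not part of the captured run:

	# illustrative only: query individual status fields for the same profile
	out/minikube-linux-amd64 status -p functional-20201211203409-6575 --format='{{.Host}}'        # printed "Running" above
	out/minikube-linux-amd64 status -p functional-20201211203409-6575 --format='{{.APIServer}}'   # "Stopped" in the earlier stdout block
	# fields can also be combined in one template
	out/minikube-linux-amd64 status -p functional-20201211203409-6575 --format='host:{{.Host}} apiserver:{{.APIServer}}'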
helpers_test.go:238: <<< TestFunctional/parallel/DockerEnv FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestFunctional/parallel/DockerEnv]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 logs -n 25
helpers_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p functional-20201211203409-6575 logs -n 25: (10.419321865s)
helpers_test.go:246: TestFunctional/parallel/DockerEnv logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Fri 2020-12-11 20:34:12 UTC, end at Fri 2020-12-11 20:49:57 UTC. --
	* Dec 11 20:49:49 functional-20201211203409-6575 dockerd[419]: time="2020-12-11T20:49:49.648783587Z" level=info msg="Daemon shutdown complete"
	* Dec 11 20:49:49 functional-20201211203409-6575 systemd[1]: docker.service: Succeeded.
	* Dec 11 20:49:49 functional-20201211203409-6575 systemd[1]: Stopped Docker Application Container Engine.
	* Dec 11 20:49:49 functional-20201211203409-6575 systemd[1]: Starting Docker Application Container Engine...
	* Dec 11 20:49:49 functional-20201211203409-6575 dockerd[11213]: time="2020-12-11T20:49:49.751323271Z" level=info msg="Starting up"
	* Dec 11 20:49:49 functional-20201211203409-6575 dockerd[11213]: time="2020-12-11T20:49:49.754046769Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Dec 11 20:49:49 functional-20201211203409-6575 dockerd[11213]: time="2020-12-11T20:49:49.754095073Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Dec 11 20:49:49 functional-20201211203409-6575 dockerd[11213]: time="2020-12-11T20:49:49.754125189Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Dec 11 20:49:49 functional-20201211203409-6575 dockerd[11213]: time="2020-12-11T20:49:49.754137561Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Dec 11 20:49:49 functional-20201211203409-6575 dockerd[11213]: time="2020-12-11T20:49:49.755520512Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Dec 11 20:49:49 functional-20201211203409-6575 dockerd[11213]: time="2020-12-11T20:49:49.755556740Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Dec 11 20:49:49 functional-20201211203409-6575 dockerd[11213]: time="2020-12-11T20:49:49.755590803Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	* Dec 11 20:49:49 functional-20201211203409-6575 dockerd[11213]: time="2020-12-11T20:49:49.755610628Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Dec 11 20:49:49 functional-20201211203409-6575 dockerd[11213]: time="2020-12-11T20:49:49.784074860Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Dec 11 20:49:49 functional-20201211203409-6575 dockerd[11213]: time="2020-12-11T20:49:49.801208134Z" level=warning msg="Your kernel does not support swap memory limit"
	* Dec 11 20:49:49 functional-20201211203409-6575 dockerd[11213]: time="2020-12-11T20:49:49.801239416Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
	* Dec 11 20:49:49 functional-20201211203409-6575 dockerd[11213]: time="2020-12-11T20:49:49.801428820Z" level=info msg="Loading containers: start."
	* Dec 11 20:49:49 functional-20201211203409-6575 dockerd[11213]: time="2020-12-11T20:49:49.978157888Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Dec 11 20:49:50 functional-20201211203409-6575 dockerd[11213]: time="2020-12-11T20:49:50.043423993Z" level=info msg="Loading containers: done."
	* Dec 11 20:49:50 functional-20201211203409-6575 dockerd[11213]: time="2020-12-11T20:49:50.075925151Z" level=info msg="Docker daemon" commit=eeddea2 graphdriver(s)=overlay2 version=20.10.0
	* Dec 11 20:49:50 functional-20201211203409-6575 dockerd[11213]: time="2020-12-11T20:49:50.076025701Z" level=info msg="Daemon has completed initialization"
	* Dec 11 20:49:50 functional-20201211203409-6575 systemd[1]: Started Docker Application Container Engine.
	* Dec 11 20:49:50 functional-20201211203409-6575 dockerd[11213]: time="2020-12-11T20:49:50.093712466Z" level=info msg="API listen on [::]:2376"
	* Dec 11 20:49:50 functional-20201211203409-6575 dockerd[11213]: time="2020-12-11T20:49:50.100401170Z" level=info msg="API listen on /var/run/docker.sock"
	* Dec 11 20:49:57 functional-20201211203409-6575 dockerd[11213]: time="2020-12-11T20:49:57.216101425Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID
	* 7c1444a9b9179       10cc881966cfd                                                                   1 second ago        Created             kube-proxy                2                   e098e1821f360
	* e818eecdcc181       bfe3a36ebd252                                                                   1 second ago        Running             coredns                   2                   3f80f75915804
	* f0b82b72ed239       3138b6e3d4712                                                                   3 seconds ago       Running             kube-scheduler            2                   df614e80a7927
	* 4b7908b732031       ca9843d3b5454                                                                   3 seconds ago       Running             kube-apiserver            2                   8573763fc7875
	* d4f2390562640       0369cf4303ffd                                                                   3 seconds ago       Running             etcd                      2                   cc0e5ba870919
	* d45fe18147bf8       nginx@sha256:31de7d2fd0e751685e57339d2b4a4aa175aea922e592d36a7078d72db0a45639   25 seconds ago      Exited              myfrontend                0                   4d5ca6f9382cf
	* a39586facb272       b9fa1895dcaa6                                                                   14 minutes ago      Exited              kube-controller-manager   2                   d17c6644cd86d
	* 3695d9368bd03       ca9843d3b5454                                                                   14 minutes ago      Exited              kube-apiserver            1                   79edd046cdebe
	* aca38d523094c       bfe3a36ebd252                                                                   14 minutes ago      Exited              coredns                   1                   6fe693aaf24a8
	* 16c7b9f843d67       ca9843d3b5454                                                                   14 minutes ago      Exited              kube-apiserver            0                   79edd046cdebe
	* 8b7108e347363       85069258b98ac                                                                   14 minutes ago      Exited              storage-provisioner       2                   4febdb6e3c8a0
	* 1c00bd6c4f854       3138b6e3d4712                                                                   14 minutes ago      Exited              kube-scheduler            1                   ad367d0277ea1
	* 5cbaf0ed3482a       85069258b98ac                                                                   14 minutes ago      Exited              storage-provisioner       1                   4febdb6e3c8a0
	* 7134a03f96166       10cc881966cfd                                                                   14 minutes ago      Exited              kube-proxy                1                   c2f54539ae98b
	* 75f3eaca619d6       0369cf4303ffd                                                                   14 minutes ago      Exited              etcd                      1                   a80d1cb0c205e
	* 
	* ==> coredns [aca38d523094] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	* CoreDNS-1.7.0
	* linux/amd64, go1.14.4, f59c03d
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] SIGTERM: Shutting down servers then terminating
	* [INFO] plugin/health: Going into lameduck mode for 5s
	* E1211 20:35:23.397240       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E1211 20:35:23.397375       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E1211 20:35:23.397416       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	* E1211 20:35:28.883123       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	* E1211 20:35:28.883212       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	* E1211 20:35:28.883280       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpoints" in API group "" at the cluster scope
	* E1211 20:49:42.502790       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=1156&timeout=7m20s&timeoutSeconds=440&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	* E1211 20:49:42.502799       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=499&timeout=6m24s&timeoutSeconds=384&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	* E1211 20:49:42.502913       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=499&timeout=6m27s&timeoutSeconds=387&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	* 
	* ==> describe nodes <==
	* Name:               functional-20201211203409-6575
	* Roles:              control-plane,master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=functional-20201211203409-6575
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=fc69cfe93e0c46b6d41ab5653129ddf7843209ed
	*                     minikube.k8s.io/name=functional-20201211203409-6575
	*                     minikube.k8s.io/updated_at=2020_12_11T20_34_35_0700
	*                     minikube.k8s.io/version=v1.15.1
	*                     node-role.kubernetes.io/control-plane=
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Fri, 11 Dec 2020 20:34:32 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  functional-20201211203409-6575
	*   AcquireTime:     <unset>
	*   RenewTime:       Fri, 11 Dec 2020 20:50:04 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Fri, 11 Dec 2020 20:49:25 +0000   Fri, 11 Dec 2020 20:34:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Fri, 11 Dec 2020 20:49:25 +0000   Fri, 11 Dec 2020 20:34:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Fri, 11 Dec 2020 20:49:25 +0000   Fri, 11 Dec 2020 20:34:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Fri, 11 Dec 2020 20:49:25 +0000   Fri, 11 Dec 2020 20:35:21 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.49.176
	*   Hostname:    functional-20201211203409-6575
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  309568300Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  309568300Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887000Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 b396ccc06bd140ff84120c753dab8448
	*   System UUID:                e513409d-2196-4295-aad2-74d11329a7e8
	*   Boot ID:                    ff2e882c-ceac-4ec5-a892-a979e1bf648a
	*   Kernel Version:             4.9.0-14-amd64
	*   OS Image:                   Ubuntu 20.04.1 LTS
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://20.10.0
	*   Kubelet Version:            v1.20.0
	*   Kube-Proxy Version:         v1.20.0
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (8 in total)
	*   Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	*   default                     sp-pod                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	*   kube-system                 coredns-74ff55c5b-k7fzr                                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	*   kube-system                 etcd-functional-20201211203409-6575                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	*   kube-system                 kube-apiserver-functional-20201211203409-6575             250m (3%)     0 (0%)      0 (0%)           0 (0%)         14m
	*   kube-system                 kube-controller-manager-functional-20201211203409-6575    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	*   kube-system                 kube-proxy-q5kfz                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	*   kube-system                 kube-scheduler-functional-20201211203409-6575             100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	*   kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                750m (9%)   0 (0%)
	*   memory             170Mi (0%)  170Mi (0%)
	*   ephemeral-storage  100Mi (0%)  0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age   From        Message
	*   ----    ------                   ----  ----        -------
	*   Normal  Starting                 15m   kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  15m   kubelet     Node functional-20201211203409-6575 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    15m   kubelet     Node functional-20201211203409-6575 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     15m   kubelet     Node functional-20201211203409-6575 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             15m   kubelet     Node functional-20201211203409-6575 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  15m   kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                15m   kubelet     Node functional-20201211203409-6575 status is now: NodeReady
	*   Normal  Starting                 15m   kube-proxy  Starting kube-proxy.
	*   Normal  Starting                 14m   kubelet     Starting kubelet.
	*   Normal  Starting                 14m   kube-proxy  Starting kube-proxy.
	*   Normal  NodeHasSufficientMemory  14m   kubelet     Node functional-20201211203409-6575 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    14m   kubelet     Node functional-20201211203409-6575 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     14m   kubelet     Node functional-20201211203409-6575 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             14m   kubelet     Node functional-20201211203409-6575 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  14m   kubelet     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                14m   kubelet     Node functional-20201211203409-6575 status is now: NodeReady
	* 
	* ==> dmesg <==
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 8a 0c 6e 94 e2 33 08 06        ........n..3..
	* [  +5.124598] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +17.388333] cgroup: cgroup2: unknown option "nsdelegate"
	* [Dec11 20:38] cgroup: cgroup2: unknown option "nsdelegate"
	* [Dec11 20:39] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +16.296127] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth6d3b9e3d
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff fa eb fa d7 11 74 08 06        ...........t..
	* [  +3.224752] cgroup: cgroup2: unknown option "nsdelegate"
	* [Dec11 20:40] cgroup: cgroup2: unknown option "nsdelegate"
	* [Dec11 20:44] cgroup: cgroup2: unknown option "nsdelegate"
	* [Dec11 20:45] cgroup: cgroup2: unknown option "nsdelegate"
	* [Dec11 20:46] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +11.810048] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +0.036872] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +1.015616] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +17.612653] cgroup: cgroup2: unknown option "nsdelegate"
	* [Dec11 20:47] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +11.140555] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +0.371449] tee (128201): /proc/121134/oom_adj is deprecated, please use /proc/121134/oom_score_adj instead.
	* [ +15.081256] cgroup: cgroup2: unknown option "nsdelegate"
	* [Dec11 20:48] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +12.348908] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +38.908713] cgroup: cgroup2: unknown option "nsdelegate"
	* [Dec11 20:49] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +47.070477] cgroup: cgroup2: unknown option "nsdelegate"
	* 
	* ==> etcd [75f3eaca619d] <==
	* 2020-12-11 20:48:51.949089 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-12-11 20:49:01.949103 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-12-11 20:49:03.974536 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (633.038469ms) to execute
	* 2020-12-11 20:49:03.974581 W | etcdserver: request "header:<ID:7961649817326334447 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/functional-20201211203409-6575\" mod_revision:1088 > success:<request_put:<key:\"/registry/leases/kube-node-lease/functional-20201211203409-6575\" value_size:602 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/functional-20201211203409-6575\" > >>" with result "size:16" took too long (103.28244ms) to execute
	* 2020-12-11 20:49:03.976878 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:7 size:36592" took too long (137.938071ms) to execute
	* 2020-12-11 20:49:06.425109 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1129" took too long (1.228918203s) to execute
	* 2020-12-11 20:49:06.425165 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (2.194218506s) to execute
	* 2020-12-11 20:49:06.425210 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy/\" range_end:\"/registry/podsecuritypolicy0\" count_only:true " with result "range_response_count:0 size:5" took too long (1.776854482s) to execute
	* 2020-12-11 20:49:06.425284 W | etcdserver: request "header:<ID:7961649817326334455 > lease_revoke:<id:6e7d765383ca31af>" with result "size:28" took too long (993.378549ms) to execute
	* 2020-12-11 20:49:06.425345 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (2.086547157s) to execute
	* 2020-12-11 20:49:08.135087 W | etcdserver: request "header:<ID:7961649817326334460 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-functional-20201211203409-6575.164fc480ef56e6d4\" mod_revision:1029 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-functional-20201211203409-6575.164fc480ef56e6d4\" value_size:759 lease:7961649817326334458 >> failure:<request_range:<key:\"/registry/events/kube-system/kube-apiserver-functional-20201211203409-6575.164fc480ef56e6d4\" > >>" with result "size:16" took too long (1.577522155s) to execute
	* 2020-12-11 20:49:08.135459 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (1.701754293s) to execute
	* 2020-12-11 20:49:09.342799 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000143711s) to execute
	* WARNING: 2020/12/11 20:49:09 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	* 2020-12-11 20:49:09.693480 W | wal: sync duration of 3.136117483s, expected less than 1s
	* 2020-12-11 20:49:09.693856 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (338.990507ms) to execute
	* 2020-12-11 20:49:09.693920 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-apiserver-functional-20201211203409-6575.164fc480ef56e6d4\" " with result "range_response_count:1 size:883" took too long (340.485923ms) to execute
	* 2020-12-11 20:49:09.694033 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true " with result "range_response_count:0 size:7" took too long (1.500852656s) to execute
	* 2020-12-11 20:49:11.949008 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-12-11 20:49:21.949093 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-12-11 20:49:31.949010 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-12-11 20:49:41.948911 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-12-11 20:49:42.517460 N | pkg/osutil: received terminated signal, shutting down...
	* WARNING: 2020/12/11 20:49:42 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* 2020-12-11 20:49:42.583723 I | etcdserver: skipped leadership transfer for single voting member cluster
	* 
	* ==> etcd [d4f239056264] <==
	* 2020-12-11 20:49:56.226419 I | embed: initial cluster = 
	* 2020-12-11 20:49:56.244558 I | etcdserver: restarting member 920e61f3e5fa6e7d in cluster 4f917d7f1b762750 at commit index 1459
	* raft2020/12/11 20:49:56 INFO: 920e61f3e5fa6e7d switched to configuration voters=()
	* raft2020/12/11 20:49:56 INFO: 920e61f3e5fa6e7d became follower at term 3
	* raft2020/12/11 20:49:56 INFO: newRaft 920e61f3e5fa6e7d [peers: [], term: 3, commit: 1459, applied: 0, lastindex: 1459, lastterm: 3]
	* 2020-12-11 20:49:56.246435 W | auth: simple token is not cryptographically signed
	* 2020-12-11 20:49:56.247927 I | mvcc: restore compact to 734
	* 2020-12-11 20:49:56.254086 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	* raft2020/12/11 20:49:56 INFO: 920e61f3e5fa6e7d switched to configuration voters=(10524457079374769789)
	* 2020-12-11 20:49:56.254751 I | etcdserver/membership: added member 920e61f3e5fa6e7d [https://192.168.49.176:2380] to cluster 4f917d7f1b762750
	* 2020-12-11 20:49:56.254906 N | etcdserver/membership: set the initial cluster version to 3.4
	* 2020-12-11 20:49:56.254952 I | etcdserver/api: enabled capabilities for version 3.4
	* 2020-12-11 20:49:56.256924 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	* 2020-12-11 20:49:56.257046 I | embed: listening for peers on 192.168.49.176:2380
	* 2020-12-11 20:49:56.257230 I | embed: listening for metrics on http://127.0.0.1:2381
	* raft2020/12/11 20:49:57 INFO: 920e61f3e5fa6e7d is starting a new election at term 3
	* raft2020/12/11 20:49:57 INFO: 920e61f3e5fa6e7d became candidate at term 4
	* raft2020/12/11 20:49:57 INFO: 920e61f3e5fa6e7d received MsgVoteResp from 920e61f3e5fa6e7d at term 4
	* raft2020/12/11 20:49:57 INFO: 920e61f3e5fa6e7d became leader at term 4
	* raft2020/12/11 20:49:57 INFO: raft.node: 920e61f3e5fa6e7d elected leader 920e61f3e5fa6e7d at term 4
	* 2020-12-11 20:49:57.785543 I | etcdserver: published {Name:functional-20201211203409-6575 ClientURLs:[https://192.168.49.176:2379]} to cluster 4f917d7f1b762750
	* 2020-12-11 20:49:57.785576 I | embed: ready to serve client requests
	* 2020-12-11 20:49:57.785808 I | embed: ready to serve client requests
	* 2020-12-11 20:49:57.789397 I | embed: serving client requests on 192.168.49.176:2379
	* 2020-12-11 20:49:57.824716 I | embed: serving client requests on 127.0.0.1:2379
	* 
	* ==> kernel <==
	*  20:50:05 up 32 min,  0 users,  load average: 7.38, 5.50, 3.20
	* Linux functional-20201211203409-6575 4.9.0-14-amd64 #1 SMP Debian 4.9.240-2 (2020-10-30) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
	* 
	* ==> kube-apiserver [16c7b9f843d6] <==
	* Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
	* I1211 20:35:23.110061       1 server.go:632] external host was not specified, using 192.168.49.176
	* I1211 20:35:23.110594       1 server.go:182] Version: v1.20.0
	* Error: failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use
	* 
	* ==> kube-apiserver [3695d9368bd0] <==
	* I1211 20:49:42.592106       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	* W1211 20:49:42.592167       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:49:42.592225       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:49:42.592277       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* I1211 20:49:42.592320       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	* W1211 20:49:42.592332       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:49:42.592381       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:49:42.592435       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* I1211 20:49:42.592444       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	* W1211 20:49:42.592500       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:49:42.592572       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:49:42.592577       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* I1211 20:49:42.592670       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	* I1211 20:49:42.592715       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	* I1211 20:49:42.592790       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	* I1211 20:49:42.592827       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	* W1211 20:49:42.592867       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:49:42.592921       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* I1211 20:49:42.592942       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	* W1211 20:49:42.592967       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:49:42.593001       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* I1211 20:49:42.593066       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	* W1211 20:49:42.593079       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:49:42.593152       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* W1211 20:49:42.593363       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	* 
	* ==> kube-apiserver [4b7908b73203] <==
	* I1211 20:50:04.038144       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	* I1211 20:50:04.038154       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	* I1211 20:50:04.038825       1 available_controller.go:475] Starting AvailableConditionController
	* I1211 20:50:04.038851       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	* I1211 20:50:04.038874       1 controller.go:83] Starting OpenAPI AggregationController
	* E1211 20:50:04.040775       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.176, ResourceVersion: 0, AdditionalErrorMsg: 
	* I1211 20:50:04.041084       1 establishing_controller.go:76] Starting EstablishingController
	* I1211 20:50:04.041455       1 controller.go:86] Starting OpenAPI controller
	* I1211 20:50:04.041490       1 naming_controller.go:291] Starting NamingConditionController
	* I1211 20:50:04.041595       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	* I1211 20:50:04.041626       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	* I1211 20:50:04.041641       1 crd_finalizer.go:266] Starting CRDFinalizer
	* I1211 20:50:04.041813       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	* I1211 20:50:04.041851       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	* I1211 20:50:04.284559       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	* I1211 20:50:04.297249       1 shared_informer.go:247] Caches are synced for node_authorizer 
	* I1211 20:50:04.379168       1 cache.go:39] Caches are synced for AvailableConditionController controller
	* I1211 20:50:04.379185       1 cache.go:39] Caches are synced for autoregister controller
	* I1211 20:50:04.379272       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	* I1211 20:50:04.379306       1 apf_controller.go:253] Running API Priority and Fairness config worker
	* I1211 20:50:04.379318       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	* I1211 20:50:04.391673       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	* I1211 20:50:05.030752       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	* I1211 20:50:05.030793       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	* I1211 20:50:05.044232       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	* 
	* ==> kube-controller-manager [a39586facb27] <==
	* I1211 20:35:41.281789       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	* I1211 20:35:41.290069       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	* I1211 20:35:41.296716       1 shared_informer.go:247] Caches are synced for persistent volume 
	* I1211 20:35:41.297935       1 shared_informer.go:247] Caches are synced for ReplicationController 
	* I1211 20:35:41.302478       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	* I1211 20:35:41.302509       1 shared_informer.go:247] Caches are synced for HPA 
	* I1211 20:35:41.302536       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	* I1211 20:35:41.305392       1 shared_informer.go:247] Caches are synced for endpoint 
	* I1211 20:35:41.316731       1 shared_informer.go:247] Caches are synced for node 
	* I1211 20:35:41.316770       1 range_allocator.go:172] Starting range CIDR allocator
	* I1211 20:35:41.316776       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	* I1211 20:35:41.316782       1 shared_informer.go:247] Caches are synced for cidrallocator 
	* I1211 20:35:41.379207       1 shared_informer.go:247] Caches are synced for expand 
	* I1211 20:35:41.379209       1 shared_informer.go:247] Caches are synced for PV protection 
	* I1211 20:35:41.382975       1 shared_informer.go:247] Caches are synced for namespace 
	* I1211 20:35:41.391899       1 shared_informer.go:247] Caches are synced for attach detach 
	* I1211 20:35:41.451594       1 shared_informer.go:247] Caches are synced for service account 
	* I1211 20:35:41.515580       1 shared_informer.go:247] Caches are synced for resource quota 
	* I1211 20:35:41.560850       1 shared_informer.go:247] Caches are synced for resource quota 
	* I1211 20:35:41.673618       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I1211 20:35:41.951161       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1211 20:35:41.951189       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I1211 20:35:41.973855       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1211 20:49:09.815402       1 event.go:291] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	* I1211 20:49:09.815720       1 event.go:291] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	* 
	* ==> kube-proxy [7134a03f9616] <==
	* I1211 20:35:21.003865       1 node.go:172] Successfully retrieved node IP: 192.168.49.176
	* I1211 20:35:21.003929       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.176), assume IPv4 operation
	* W1211 20:35:21.181173       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	* I1211 20:35:21.181292       1 server_others.go:185] Using iptables Proxier.
	* I1211 20:35:21.181645       1 server.go:650] Version: v1.20.0
	* I1211 20:35:21.182149       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I1211 20:35:21.182441       1 config.go:315] Starting service config controller
	* I1211 20:35:21.182462       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I1211 20:35:21.183013       1 config.go:224] Starting endpoint slice config controller
	* I1211 20:35:21.183033       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I1211 20:35:21.282527       1 shared_informer.go:247] Caches are synced for service config 
	* I1211 20:35:21.283185       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* 
	* ==> kube-scheduler [1c00bd6c4f85] <==
	* I1211 20:35:15.370995       1 serving.go:331] Generated self-signed cert in-memory
	* W1211 20:35:20.801601       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	* W1211 20:35:20.801639       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	* W1211 20:35:20.801653       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	* W1211 20:35:20.801665       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	* I1211 20:35:20.991419       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	* I1211 20:35:20.991578       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1211 20:35:20.991591       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1211 20:35:20.991611       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* I1211 20:35:21.191645       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* E1211 20:35:28.785123       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
	* E1211 20:35:28.785257       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	* E1211 20:35:28.785339       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)
	* E1211 20:35:28.785400       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	* E1211 20:35:28.785459       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)
	* E1211 20:35:28.785536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	* E1211 20:35:28.785632       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	* E1211 20:35:28.785687       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: unknown (get pods)
	* E1211 20:35:28.785761       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	* E1211 20:35:28.786044       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	* E1211 20:35:28.786209       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	* 
	* ==> kube-scheduler [f0b82b72ed23] <==
	* I1211 20:49:58.326134       1 serving.go:331] Generated self-signed cert in-memory
	* W1211 20:50:04.183777       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	* W1211 20:50:04.183822       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	* W1211 20:50:04.183859       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	* W1211 20:50:04.183872       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	* I1211 20:50:04.302605       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1211 20:50:04.302884       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I1211 20:50:04.304833       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	* I1211 20:50:04.304993       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* I1211 20:50:04.406696       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2020-12-11 20:34:12 UTC, end at Fri 2020-12-11 20:50:06 UTC. --
	* Dec 11 20:49:55 functional-20201211203409-6575 kubelet[5983]: E1211 20:49:55.280183    5983 kubelet_node_status.go:434] Unable to update node status: update node status exceeds retry count
	* Dec 11 20:49:55 functional-20201211203409-6575 kubelet[5983]: W1211 20:49:55.405552    5983 status_manager.go:550] Failed to get status for pod "kube-scheduler-functional-20201211203409-6575_kube-system(3478da2c440ba32fb6c087b3f3b99813)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20201211203409-6575": dial tcp 192.168.49.176:8441: connect: connection refused
	* Dec 11 20:49:55 functional-20201211203409-6575 kubelet[5983]: W1211 20:49:55.412735    5983 pod_container_deletor.go:79] Container "cf93c31688c20773eb7ba51d5bea6e8d6dfade986899e0a594889c96baae8bb5" not found in pod's containers
	* Dec 11 20:49:55 functional-20201211203409-6575 kubelet[5983]: I1211 20:49:55.412771    5983 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8f47c45841ce181a16c5266f71b5f0d38749997a17e8eb43c89ca10d3621c48a
	* Dec 11 20:49:55 functional-20201211203409-6575 kubelet[5983]: W1211 20:49:55.413354    5983 status_manager.go:550] Failed to get status for pod "coredns-74ff55c5b-k7fzr_kube-system(72e36272-437d-408d-9e87-64b60d07ce98)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-74ff55c5b-k7fzr": dial tcp 192.168.49.176:8441: connect: connection refused
	* Dec 11 20:49:55 functional-20201211203409-6575 kubelet[5983]: W1211 20:49:55.987494    5983 pod_container_deletor.go:79] Container "8573763fc7875c483b0a8a006c58a1fadc97e49ce91dd4bcc7e50be44f3e1819" not found in pod's containers
	* Dec 11 20:49:57 functional-20201211203409-6575 kubelet[5983]: W1211 20:49:57.186425    5983 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-k7fzr through plugin: invalid network status for
	* Dec 11 20:49:57 functional-20201211203409-6575 kubelet[5983]: W1211 20:49:57.204752    5983 pod_container_deletor.go:79] Container "3f80f75915804e32a621e5b339c3e9ef045de5aac42127dce0ebeac80601b1e9" not found in pod's containers
	* Dec 11 20:49:57 functional-20201211203409-6575 kubelet[5983]: W1211 20:49:57.227252    5983 pod_container_deletor.go:79] Container "df614e80a79271694b3be185cb82f4572a0a2598b20e073170f641c5e364ccce" not found in pod's containers
	* Dec 11 20:49:57 functional-20201211203409-6575 kubelet[5983]: W1211 20:49:57.294867    5983 pod_container_deletor.go:79] Container "cc0e5ba8709192873ec0be27a0f3b198e3a203ad9ab9cf85a0723b757f806d72" not found in pod's containers
	* Dec 11 20:49:58 functional-20201211203409-6575 kubelet[5983]: W1211 20:49:58.882497    5983 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-k7fzr through plugin: invalid network status for
	* Dec 11 20:50:01 functional-20201211203409-6575 kubelet[5983]: W1211 20:50:01.136749    5983 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/sp-pod through plugin: invalid network status for
	* Dec 11 20:50:01 functional-20201211203409-6575 kubelet[5983]: W1211 20:50:01.180646    5983 pod_container_deletor.go:79] Container "14f05e6d88a69c69212e7b15dc910d61277d8fe3506707f0eb66368887189509" not found in pod's containers
	* Dec 11 20:50:02 functional-20201211203409-6575 kubelet[5983]: W1211 20:50:02.192541    5983 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/sp-pod through plugin: invalid network status for
	* Dec 11 20:50:03 functional-20201211203409-6575 kubelet[5983]: W1211 20:50:03.066460    5983 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
	* Dec 11 20:50:03 functional-20201211203409-6575 kubelet[5983]: W1211 20:50:03.075534    5983 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
	* Dec 11 20:50:03 functional-20201211203409-6575 kubelet[5983]: W1211 20:50:03.212804    5983 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/sp-pod through plugin: invalid network status for
	* Dec 11 20:50:04 functional-20201211203409-6575 kubelet[5983]: E1211 20:50:04.060327    5983 desired_state_of_world_populator.go:338] Error processing volume "mypd" for pod "sp-pod_default(060a6c4d-192c-4e3e-831b-21a4583783ad)": error processing PVC default/myclaim: failed to fetch PVC from API server: persistentvolumeclaims "myclaim" is forbidden: User "system:node:functional-20201211203409-6575" cannot get resource "persistentvolumeclaims" in API group "" in the namespace "default": no relationship found between node 'functional-20201211203409-6575' and this object
	* Dec 11 20:50:04 functional-20201211203409-6575 kubelet[5983]: E1211 20:50:04.063282    5983 reflector.go:138] object-"kube-system"/"storage-provisioner-token-2p824": Failed to watch *v1.Secret: unknown (get secrets)
	* Dec 11 20:50:04 functional-20201211203409-6575 kubelet[5983]: E1211 20:50:04.079585    5983 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	* Dec 11 20:50:04 functional-20201211203409-6575 kubelet[5983]: E1211 20:50:04.079837    5983 reflector.go:138] object-"kube-system"/"kube-proxy-token-v2kqp": Failed to watch *v1.Secret: unknown (get secrets)
	* Dec 11 20:50:04 functional-20201211203409-6575 kubelet[5983]: E1211 20:50:04.063553    5983 reflector.go:138] object-"default"/"default-token-v5cg7": Failed to watch *v1.Secret: unknown (get secrets)
	* Dec 11 20:50:04 functional-20201211203409-6575 kubelet[5983]: E1211 20:50:04.063591    5983 reflector.go:138] object-"kube-system"/"coredns-token-ljmwt": Failed to watch *v1.Secret: unknown (get secrets)
	* Dec 11 20:50:04 functional-20201211203409-6575 kubelet[5983]: E1211 20:50:04.063598    5983 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	* Dec 11 20:50:04 functional-20201211203409-6575 kubelet[5983]: W1211 20:50:04.487556    5983 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/sp-pod through plugin: invalid network status for
	* 
	* ==> storage-provisioner [5cbaf0ed3482] <==
	* I1211 20:35:14.513397       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
	* F1211 20:35:20.795704       1 main.go:39] error getting server version: unknown
	* 
	* ==> storage-provisioner [8b7108e34736] <==
	* I1211 20:35:22.351583       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
	* I1211 20:35:22.393421       1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
	* I1211 20:35:22.393479       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I1211 20:35:39.929076       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I1211 20:35:39.929204       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_functional-20201211203409-6575_71c27561-1a32-49a6-96cd-63467dd78b8d!
	* I1211 20:35:39.929228       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"474771ee-07d6-4aef-9a1f-174929359ea7", APIVersion:"v1", ResourceVersion:"526", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20201211203409-6575_71c27561-1a32-49a6-96cd-63467dd78b8d became leader
	* I1211 20:35:40.029523       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_functional-20201211203409-6575_71c27561-1a32-49a6-96cd-63467dd78b8d!
	* I1211 20:49:09.816636       1 controller.go:1284] provision "default/myclaim" class "standard": started
	* I1211 20:49:09.822785       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"783fe50e-268d-4504-8bad-954741763578", APIVersion:"v1", ResourceVersion:"1100", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	* I1211 20:49:09.821266       1 storage_provisioner.go:60] Provisioning volume {&StorageClass{ObjectMeta:{standard    ccd07b28-7185-4d56-b064-52e407c4aeb1 436 0 2020-12-11 20:34:54 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	*  storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2020-12-11 20:34:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 107 117 98 101 99 116 108 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 108 97 115 116 45 97 112 112 108 105 101 100 45 99 111 110 102 105 103 117 114 97 116 105 111 110 34 58 123 125 44 34 102 58 115 116 111 114 97 103 101 99 108 97 115 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 105 115 45 100 101 102 97 117 108 116 45 99 108 97 115 115 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 111 110 109 97 110 97 103 101 114 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 111 100 101 34 58 123 125 125 125 44 34 102 58 112 114 111 118 105 115 105 111 110 101 114 34 58 123 125 44 34 102 58 114 101 99 108
97 105 109 80 111 108 105 99 121 34 58 123 125 44 34 102 58 118 111 108 117 109 101 66 105 110 100 105 110 103 77 111 100 101 34 58 123 125 125],}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-783fe50e-268d-4504-8bad-954741763578 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  783fe50e-268d-4504-8bad-954741763578 1100 0 2020-12-11 20:49:09 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	*  volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2020-12-11 20:49:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 118 111 108 117 109 101 46 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 116 111 114 97 103 101 45 112 114 111 118 105 115 105 111 110 101 114 34 58 123 125 125 125 125],}} {kubectl-client-side-apply Update v1 2020-12-11 20:49:09 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 107 117 98 101 99 116 108 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 108 97 115 116 45 97 112 112 108 105 101 100 45 99 111 110 102 105 103 117 114 97 116 105 111 110 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58
97 99 99 101 115 115 77 111 100 101 115 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 34 102 58 114 101 113 117 101 115 116 115 34 58 123 34 46 34 58 123 125 44 34 102 58 115 116 111 114 97 103 101 34 58 123 125 125 125 44 34 102 58 118 111 108 117 109 101 77 111 100 101 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 112 104 97 115 101 34 58 123 125 125 125],}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	* I1211 20:49:09.823752       1 controller.go:1392] provision "default/myclaim" class "standard": volume "pvc-783fe50e-268d-4504-8bad-954741763578" provisioned
	* I1211 20:49:09.823832       1 controller.go:1409] provision "default/myclaim" class "standard": succeeded
	* I1211 20:49:09.823855       1 volume_store.go:212] Trying to save persistentvolume "pvc-783fe50e-268d-4504-8bad-954741763578"
	* I1211 20:49:09.844778       1 volume_store.go:219] persistentvolume "pvc-783fe50e-268d-4504-8bad-954741763578" saved
	* I1211 20:49:09.845316       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"783fe50e-268d-4504-8bad-954741763578", APIVersion:"v1", ResourceVersion:"1100", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-783fe50e-268d-4504-8bad-954741763578

                                                
                                                
-- /stdout --
** stderr ** 
	E1211 20:50:04.873755  171689 out.go:318] unable to execute * 2020-12-11 20:49:03.974581 W | etcdserver: request "header:<ID:7961649817326334447 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/functional-20201211203409-6575\" mod_revision:1088 > success:<request_put:<key:\"/registry/leases/kube-node-lease/functional-20201211203409-6575\" value_size:602 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/functional-20201211203409-6575\" > >>" with result "size:16" took too long (103.28244ms) to execute
	: html/template:* 2020-12-11 20:49:03.974581 W | etcdserver: request "header:<ID:7961649817326334447 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/functional-20201211203409-6575\" mod_revision:1088 > success:<request_put:<key:\"/registry/leases/kube-node-lease/functional-20201211203409-6575\" value_size:602 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/functional-20201211203409-6575\" > >>" with result "size:16" took too long (103.28244ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E1211 20:50:04.892399  171689 out.go:318] unable to execute * 2020-12-11 20:49:08.135087 W | etcdserver: request "header:<ID:7961649817326334460 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-functional-20201211203409-6575.164fc480ef56e6d4\" mod_revision:1029 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-functional-20201211203409-6575.164fc480ef56e6d4\" value_size:759 lease:7961649817326334458 >> failure:<request_range:<key:\"/registry/events/kube-system/kube-apiserver-functional-20201211203409-6575.164fc480ef56e6d4\" > >>" with result "size:16" took too long (1.577522155s) to execute
	: html/template:* 2020-12-11 20:49:08.135087 W | etcdserver: request "header:<ID:7961649817326334460 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-functional-20201211203409-6575.164fc480ef56e6d4\" mod_revision:1029 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-functional-20201211203409-6575.164fc480ef56e6d4\" value_size:759 lease:7961649817326334458 >> failure:<request_range:<key:\"/registry/events/kube-system/kube-apiserver-functional-20201211203409-6575.164fc480ef56e6d4\" > >>" with result "size:16" took too long (1.577522155s) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.
	E1211 20:50:06.409625  171689 out.go:313] unable to parse "*  volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2020-12-11 20:49:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 118 111 108 117 109 101 46 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 116 111 114 97 103 101 45 112 114 111 118 105 115 105 111 110 101 114 34 58 123 125 125 125 125],}} {kubectl-client-side-apply Update v1 2020-12-11 20:49:09 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 107 117 98 101 99 116 108 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 108 97 115 116 45 97 112 112 108 105 101 100 45 99 111 110 102 105 103 117 114 97 116 105 111 110 34 58 123
125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 97 99 99 101 115 115 77 111 100 101 115 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 34 102 58 114 101 113 117 101 115 116 115 34 58 123 34 46 34 58 123 125 44 34 102 58 115 116 111 114 97 103 101 34 58 123 125 125 125 44 34 102 58 118 111 108 117 109 101 77 111 100 101 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 112 104 97 115 101 34 58 123 125 125 125],}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim\n": template: *  volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-
hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2020-12-11 20:49:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 118 111 108 117 109 101 46 98 101 116 97 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 115 116 111 114 97 103 101 45 112 114 111 118 105 115 105 111 110 101 114 34 58 123 125 125 125 125],}} {kubectl-client-side-apply Update v1 2020-12-11 20:49:09 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 107 117 98 101 99 116 108 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 108 97 115 116 45 97 112 112 108 105 101 100 45 99 111 110 102 105 103 117 114 97 116 105 111 110 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 97 99 99 101 115 115 77 111 100 101 115 34 58 123 125 44 34 102 58 114 10
1 115 111 117 114 99 101 115 34 58 123 34 102 58 114 101 113 117 101 115 116 115 34 58 123 34 46 34 58 123 125 44 34 102 58 115 116 111 114 97 103 101 34 58 123 125 125 125 44 34 102 58 118 111 108 117 109 101 77 111 100 101 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 112 104 97 115 101 34 58 123 125 125 125],}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	:1: unexpected "}" in operand - returning raw string.

                                                
                                                
** /stderr **
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-20201211203409-6575 -n functional-20201211203409-6575
helpers_test.go:255: (dbg) Run:  kubectl --context functional-20201211203409-6575 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: non-running pods: 
helpers_test.go:263: ======> post-mortem[TestFunctional/parallel/DockerEnv]: describe non-running pods <======
helpers_test.go:266: (dbg) Run:  kubectl --context functional-20201211203409-6575 describe pod 
helpers_test.go:266: (dbg) Non-zero exit: kubectl --context functional-20201211203409-6575 describe pod : exit status 1 (84.938998ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:268: kubectl --context functional-20201211203409-6575 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv (25.27s)

                                                
                                    

Test pass (201/213)

Order passed test Duration
4 TestDownloadOnly/crio/v1.13.0/check_json_events 0
6 TestDownloadOnly/crio/v1.20.0/check_json_events 0
8 TestDownloadOnly/crio/v1.20.0#01/check_json_events 0
9 TestDownloadOnly/crio/DeleteAll 0.46
10 TestDownloadOnly/crio/DeleteAlwaysSucceeds 0.29
13 TestDownloadOnly/docker/v1.13.0/check_json_events 0
15 TestDownloadOnly/docker/v1.20.0/check_json_events 0
17 TestDownloadOnly/docker/v1.20.0#01/check_json_events 0
18 TestDownloadOnly/docker/DeleteAll 0.43
19 TestDownloadOnly/docker/DeleteAlwaysSucceeds 0.26
22 TestDownloadOnly/containerd/v1.13.0/check_json_events 0
24 TestDownloadOnly/containerd/v1.20.0/check_json_events 0
26 TestDownloadOnly/containerd/v1.20.0#01/check_json_events 0
27 TestDownloadOnly/containerd/DeleteAll 0.45
28 TestDownloadOnly/containerd/DeleteAlwaysSucceeds 0.28
29 TestDownloadOnlyKic 10.34
32 TestOffline/group/docker 80.87
33 TestOffline/group/crio 110.47
34 TestOffline/group/containerd 102.52
37 TestAddons/parallel/Registry 23.68
38 TestAddons/parallel/Ingress 17.53
39 TestAddons/parallel/MetricsServer 16.84
40 TestAddons/parallel/HelmTiller 10.89
42 TestAddons/parallel/CSI 58.77
43 TestAddons/parallel/GCPAuth 23.46
44 TestCertOptions 54.27
45 TestDockerFlags 55.88
46 TestForceSystemdFlag 48.77
47 TestForceSystemdEnv 52.68
48 TestKVMDriverInstallOrUpdate 6.58
51 TestErrorSpam 53.91
54 TestFunctional/serial/CopySyncFile 0
55 TestFunctional/serial/StartWithProxy 46.32
56 TestFunctional/serial/SoftStart 3.89
57 TestFunctional/serial/KubeContext 0.05
58 TestFunctional/serial/KubectlGetPods 0.26
61 TestFunctional/serial/CacheCmd/cache/add_remote 4.22
62 TestFunctional/serial/CacheCmd/cache/add_local 0.84
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
64 TestFunctional/serial/CacheCmd/cache/list 0.06
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
66 TestFunctional/serial/CacheCmd/cache/cache_reload 2.08
67 TestFunctional/serial/CacheCmd/cache/delete 0.12
68 TestFunctional/serial/MinikubeKubectlCmd 0.36
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.36
70 TestFunctional/serial/ExtraConfig 24.38
75 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
76 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
79 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
80 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
83 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
84 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
87 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
88 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
89 TestJSONOutputError 0.5
92 TestMultiNode/serial/FreshStart2Nodes 78.09
93 TestMultiNode/serial/AddNode 17.89
94 TestMultiNode/serial/StopNode 2.77
95 TestMultiNode/serial/StartAfterStop 27.11
96 TestMultiNode/serial/DeleteNode 5.97
97 TestMultiNode/serial/StopMultiNode 12.69
98 TestMultiNode/serial/RestartMultiNode 87.6
102 TestPreload 227.02
104 TestScheduledStopUnix 62.79
107 TestInsufficientStorage 11.88
108 TestRunningBinaryUpgrade 98.28
109 TestStoppedBinaryUpgrade 92.3
110 TestKubernetesUpgrade 149.82
111 TestMissingContainerUpgrade 350.07
113 TestPause/serial/Start 73.48
115 TestFunctional/parallel/ComponentHealth 0.28
116 TestFunctional/parallel/ConfigCmd 0.39
117 TestFunctional/parallel/DashboardCmd 5.47
118 TestFunctional/parallel/DryRun 0.74
119 TestFunctional/parallel/StatusCmd 1.14
120 TestFunctional/parallel/LogsCmd 3.26
121 TestFunctional/parallel/MountCmd 8.67
123 TestFunctional/parallel/ServiceCmd 18.81
124 TestFunctional/parallel/AddonsCmd 0.18
125 TestFunctional/parallel/PersistentVolumeClaim 42.84
127 TestFunctional/parallel/SSHCmd 0.88
128 TestFunctional/parallel/MySQL 34.85
129 TestFunctional/parallel/FileSync 0.35
130 TestFunctional/parallel/CertSync 1.08
133 TestFunctional/parallel/NodeLabels 0.1
145 TestPause/serial/SecondStartNoReconfiguration 15.4
146 TestPause/serial/Pause 0.64
147 TestPause/serial/VerifyStatus 0.39
148 TestPause/serial/Unpause 0.67
149 TestPause/serial/PauseAgain 1.01
150 TestPause/serial/DeletePaused 3.75
151 TestPause/serial/VerifyDeletedResources 1.4
158 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
159 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
160 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
162 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
164 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
165 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
169 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
170 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
171 TestFunctional/parallel/ProfileCmd/profile_list 0.4
172 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
173 TestNetworkPlugins/group/auto/Start 60.23
174 TestNetworkPlugins/group/false/Start 59.63
175 TestNetworkPlugins/group/cilium/Start 88.41
176 TestNetworkPlugins/group/false/KubeletFlags 0.4
177 TestNetworkPlugins/group/auto/KubeletFlags 0.38
178 TestNetworkPlugins/group/false/NetCatPod 9.64
179 TestNetworkPlugins/group/auto/NetCatPod 10.64
180 TestNetworkPlugins/group/false/DNS 0.23
181 TestNetworkPlugins/group/false/Localhost 0.2
182 TestNetworkPlugins/group/false/HairPin 5.25
183 TestNetworkPlugins/group/auto/DNS 0.23
184 TestNetworkPlugins/group/auto/Localhost 0.2
185 TestNetworkPlugins/group/auto/HairPin 5.2
186 TestNetworkPlugins/group/calico/Start 97.26
187 TestNetworkPlugins/group/custom-weave/Start 66.26
188 TestNetworkPlugins/group/cilium/ControllerPod 8.06
189 TestNetworkPlugins/group/cilium/KubeletFlags 1.33
190 TestNetworkPlugins/group/cilium/NetCatPod 12.53
191 TestNetworkPlugins/group/cilium/DNS 0.25
192 TestNetworkPlugins/group/cilium/Localhost 0.3
193 TestNetworkPlugins/group/cilium/HairPin 0.28
194 TestNetworkPlugins/group/enable-default-cni/Start 59.75
195 TestNetworkPlugins/group/kindnet/Start 66.65
196 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.45
197 TestNetworkPlugins/group/custom-weave/NetCatPod 18.97
198 TestNetworkPlugins/group/bridge/Start 57.19
199 TestNetworkPlugins/group/calico/ControllerPod 5.02
200 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
201 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.37
202 TestNetworkPlugins/group/calico/KubeletFlags 0.37
203 TestNetworkPlugins/group/calico/NetCatPod 10.74
204 TestNetworkPlugins/group/enable-default-cni/DNS 0.29
205 TestNetworkPlugins/group/enable-default-cni/Localhost 0.26
206 TestNetworkPlugins/group/enable-default-cni/HairPin 0.24
207 TestNetworkPlugins/group/calico/DNS 0.51
208 TestNetworkPlugins/group/calico/Localhost 0.27
209 TestNetworkPlugins/group/calico/HairPin 0.24
210 TestNetworkPlugins/group/kubenet/Start 266.12
212 TestStartStop/group/old-k8s-version/serial/FirstStart 107.93
213 TestNetworkPlugins/group/kindnet/ControllerPod 5.22
214 TestNetworkPlugins/group/kindnet/KubeletFlags 0.57
215 TestNetworkPlugins/group/kindnet/NetCatPod 16.56
216 TestNetworkPlugins/group/kindnet/DNS 0.26
217 TestNetworkPlugins/group/kindnet/Localhost 0.23
218 TestNetworkPlugins/group/kindnet/HairPin 0.27
219 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
220 TestNetworkPlugins/group/bridge/NetCatPod 10.83
222 TestStartStop/group/crio/serial/FirstStart 143.75
223 TestNetworkPlugins/group/bridge/DNS 0.19
224 TestNetworkPlugins/group/bridge/Localhost 0.19
225 TestNetworkPlugins/group/bridge/HairPin 0.2
227 TestStartStop/group/embed-certs/serial/FirstStart 52.3
228 TestStartStop/group/embed-certs/serial/DeployApp 10.48
229 TestStartStop/group/embed-certs/serial/Stop 11.24
230 TestStartStop/group/old-k8s-version/serial/DeployApp 9.45
231 TestStartStop/group/old-k8s-version/serial/Stop 11.05
232 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.36
233 TestStartStop/group/embed-certs/serial/SecondStart 23.25
234 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.28
235 TestStartStop/group/old-k8s-version/serial/SecondStart 28.38
236 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 19.02
237 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 19.02
238 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.01
239 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.35
240 TestStartStop/group/embed-certs/serial/Pause 3.41
241 TestStartStop/group/crio/serial/DeployApp 12.51
243 TestStartStop/group/containerd/serial/FirstStart 83.28
244 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.01
245 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.34
246 TestStartStop/group/old-k8s-version/serial/Pause 3.32
247 TestStartStop/group/crio/serial/Stop 24.7
249 TestStartStop/group/newest-cni/serial/FirstStart 53.05
250 TestStartStop/group/crio/serial/EnableAddonAfterStop 0.33
251 TestStartStop/group/crio/serial/SecondStart 46.88
252 TestStartStop/group/newest-cni/serial/DeployApp 0
253 TestStartStop/group/newest-cni/serial/Stop 11.12
254 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
255 TestStartStop/group/newest-cni/serial/SecondStart 34.27
256 TestStartStop/group/containerd/serial/DeployApp 9.71
257 TestStartStop/group/crio/serial/UserAppExistsAfterStop 5.02
258 TestStartStop/group/crio/serial/AddonExistsAfterStop 5.01
259 TestNetworkPlugins/group/kubenet/KubeletFlags 0.37
260 TestNetworkPlugins/group/kubenet/NetCatPod 9.36
261 TestStartStop/group/containerd/serial/Stop 21.29
262 TestStartStop/group/crio/serial/VerifyKubernetesImages 0.46
263 TestStartStop/group/crio/serial/Pause 3.69
264 TestNetworkPlugins/group/kubenet/DNS 0.31
265 TestNetworkPlugins/group/kubenet/Localhost 0.21
266 TestNetworkPlugins/group/kubenet/HairPin 0.22
267 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
268 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
269 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.39
270 TestStartStop/group/newest-cni/serial/Pause 3.61
271 TestStartStop/group/containerd/serial/EnableAddonAfterStop 0.26
272 TestStartStop/group/containerd/serial/SecondStart 21.52
273 TestStartStop/group/containerd/serial/UserAppExistsAfterStop 19.02
274 TestStartStop/group/containerd/serial/AddonExistsAfterStop 5.01
275 TestStartStop/group/containerd/serial/VerifyKubernetesImages 0.32
276 TestStartStop/group/containerd/serial/Pause 3.16
x
+
TestDownloadOnly/crio/v1.13.0/check_json_events (0s)

                                                
                                                
=== RUN   TestDownloadOnly/crio/v1.13.0/check_json_events
--- PASS: TestDownloadOnly/crio/v1.13.0/check_json_events (0.00s)

                                                
                                    
x
+
TestDownloadOnly/crio/v1.20.0/check_json_events (0s)

                                                
                                                
=== RUN   TestDownloadOnly/crio/v1.20.0/check_json_events
--- PASS: TestDownloadOnly/crio/v1.20.0/check_json_events (0.00s)

                                                
                                    
x
+
TestDownloadOnly/crio/v1.20.0#01/check_json_events (0s)

                                                
                                                
=== RUN   TestDownloadOnly/crio/v1.20.0#01/check_json_events
--- PASS: TestDownloadOnly/crio/v1.20.0#01/check_json_events (0.00s)

                                                
                                    
x
+
TestDownloadOnly/crio/DeleteAll (0.46s)

                                                
                                                
=== RUN   TestDownloadOnly/crio/DeleteAll
aaa_download_only_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/crio/DeleteAll (0.46s)

                                                
                                    
x
+
TestDownloadOnly/crio/DeleteAlwaysSucceeds (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/crio/DeleteAlwaysSucceeds
aaa_download_only_test.go:161: (dbg) Run:  out/minikube-linux-amd64 delete -p crio-20201211202751-6575
--- PASS: TestDownloadOnly/crio/DeleteAlwaysSucceeds (0.29s)

                                                
                                    
x
+
TestDownloadOnly/docker/v1.13.0/check_json_events (0s)

                                                
                                                
=== RUN   TestDownloadOnly/docker/v1.13.0/check_json_events
--- PASS: TestDownloadOnly/docker/v1.13.0/check_json_events (0.00s)

                                                
                                    
x
+
TestDownloadOnly/docker/v1.20.0/check_json_events (0s)

                                                
                                                
=== RUN   TestDownloadOnly/docker/v1.20.0/check_json_events
--- PASS: TestDownloadOnly/docker/v1.20.0/check_json_events (0.00s)

                                                
                                    
x
+
TestDownloadOnly/docker/v1.20.0#01/check_json_events (0s)

                                                
                                                
=== RUN   TestDownloadOnly/docker/v1.20.0#01/check_json_events
--- PASS: TestDownloadOnly/docker/v1.20.0#01/check_json_events (0.00s)

                                                
                                    
x
+
TestDownloadOnly/docker/DeleteAll (0.43s)

                                                
                                                
=== RUN   TestDownloadOnly/docker/DeleteAll
aaa_download_only_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/docker/DeleteAll (0.43s)

                                                
                                    
x
+
TestDownloadOnly/docker/DeleteAlwaysSucceeds (0.26s)

                                                
                                                
=== RUN   TestDownloadOnly/docker/DeleteAlwaysSucceeds
aaa_download_only_test.go:161: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-20201211202807-6575
--- PASS: TestDownloadOnly/docker/DeleteAlwaysSucceeds (0.26s)

                                                
                                    
x
+
TestDownloadOnly/containerd/v1.13.0/check_json_events (0s)

                                                
                                                
=== RUN   TestDownloadOnly/containerd/v1.13.0/check_json_events
--- PASS: TestDownloadOnly/containerd/v1.13.0/check_json_events (0.00s)

                                                
                                    
x
+
TestDownloadOnly/containerd/v1.20.0/check_json_events (0s)

                                                
                                                
=== RUN   TestDownloadOnly/containerd/v1.20.0/check_json_events
--- PASS: TestDownloadOnly/containerd/v1.20.0/check_json_events (0.00s)

                                                
                                    
x
+
TestDownloadOnly/containerd/v1.20.0#01/check_json_events (0s)

                                                
                                                
=== RUN   TestDownloadOnly/containerd/v1.20.0#01/check_json_events
--- PASS: TestDownloadOnly/containerd/v1.20.0#01/check_json_events (0.00s)

                                                
                                    
x
+
TestDownloadOnly/containerd/DeleteAll (0.45s)

                                                
                                                
=== RUN   TestDownloadOnly/containerd/DeleteAll
aaa_download_only_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/containerd/DeleteAll (0.45s)

                                                
                                    
x
+
TestDownloadOnly/containerd/DeleteAlwaysSucceeds (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/containerd/DeleteAlwaysSucceeds
aaa_download_only_test.go:161: (dbg) Run:  out/minikube-linux-amd64 delete -p containerd-20201211202818-6575
--- PASS: TestDownloadOnly/containerd/DeleteAlwaysSucceeds (0.28s)

                                                
                                    
x
+
TestDownloadOnlyKic (10.34s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20201211202851-6575 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:187: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20201211202851-6575 --force --alsologtostderr --driver=docker : (8.731093387s)
helpers_test.go:171: Cleaning up "download-docker-20201211202851-6575" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20201211202851-6575
--- PASS: TestDownloadOnlyKic (10.34s)

                                                
                                    
x
+
TestOffline/group/docker (80.87s)

                                                
                                                
=== RUN   TestOffline/group/docker
=== PAUSE TestOffline/group/docker

                                                
                                                

                                                
                                                
=== CONT  TestOffline/group/docker

                                                
                                                
=== CONT  TestOffline/group/docker
aab_offline_test.go:54: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-20201211202901-6575 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime docker --driver=docker 

                                                
                                                
=== CONT  TestOffline/group/docker
aab_offline_test.go:54: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-20201211202901-6575 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime docker --driver=docker : (1m17.948673593s)
helpers_test.go:171: Cleaning up "offline-docker-20201211202901-6575" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-20201211202901-6575
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-20201211202901-6575: (2.920414781s)
--- PASS: TestOffline/group/docker (80.87s)

                                                
                                    
x
+
TestOffline/group/crio (110.47s)

                                                
                                                
=== RUN   TestOffline/group/crio
=== PAUSE TestOffline/group/crio

                                                
                                                

                                                
                                                
=== CONT  TestOffline/group/crio

                                                
                                                
=== CONT  TestOffline/group/crio
aab_offline_test.go:54: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-20201211202901-6575 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime crio --driver=docker 

                                                
                                                
=== CONT  TestOffline/group/crio
aab_offline_test.go:54: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-20201211202901-6575 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime crio --driver=docker : (1m47.70029239s)
helpers_test.go:171: Cleaning up "offline-crio-20201211202901-6575" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-20201211202901-6575
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-20201211202901-6575: (2.766202405s)
--- PASS: TestOffline/group/crio (110.47s)

                                                
                                    
x
+
TestOffline/group/containerd (102.52s)

                                                
                                                
=== RUN   TestOffline/group/containerd
=== PAUSE TestOffline/group/containerd

                                                
                                                

                                                
                                                
=== CONT  TestOffline/group/containerd

                                                
                                                
=== CONT  TestOffline/group/containerd
aab_offline_test.go:54: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20201211202901-6575 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime containerd --driver=docker 

                                                
                                                
=== CONT  TestOffline/group/containerd
aab_offline_test.go:54: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20201211202901-6575 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime containerd --driver=docker : (1m39.493679126s)
helpers_test.go:171: Cleaning up "offline-containerd-20201211202901-6575" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20201211202901-6575
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20201211202901-6575: (3.023432556s)
--- PASS: TestOffline/group/containerd (102.52s)

                                                
                                    
x
+
TestAddons/parallel/Registry (23.68s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:199: registry stabilized in 20.583831ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:201: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:333: "registry-tlkmj" [4d4ceb86-48ce-42f7-8574-72d52ced0cd3] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:201: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008621231s

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:204: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:333: "registry-proxy-4xp5p" [d1c43a63-6352-41dd-a309-25b870de4e3c] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:204: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008635889s
addons_test.go:209: (dbg) Run:  kubectl --context addons-20201211203051-6575 delete po -l run=registry-test --now
addons_test.go:214: (dbg) Run:  kubectl --context addons-20201211203051-6575 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:214: (dbg) Done: kubectl --context addons-20201211203051-6575 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (12.845864997s)
addons_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201211203051-6575 ip
2020/12/11 20:33:19 [DEBUG] GET http://192.168.49.176:5000
addons_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201211203051-6575 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (23.68s)
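Editorial sketch (not part of the test output): the registry check above boils down to a busybox probe of the in-cluster service plus an HTTP GET against the registry proxy on the node IP reported by "ip". The profile name, image, and service URL below are copied from the log lines above; the curl call is an assumption about how to repeat the GET logged at 20:33:19, not something the test itself records.
	kubectl --context addons-20201211203051-6575 run --rm registry-test --restart=Never --image=busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	NODE_IP=$(out/minikube-linux-amd64 -p addons-20201211203051-6575 ip)   # 192.168.49.176 in this run
	curl -sI "http://${NODE_IP}:5000"                                      # registry proxy answering on :5000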

                                                
                                    
x
+
TestAddons/parallel/Ingress (17.53s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:126: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "kube-system" ...
helpers_test.go:333: "ingress-nginx-admission-create-cw6pp" [d24791cd-2d9f-4ef7-93e3-a2d228671943] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:126: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 76.674697ms
addons_test.go:131: (dbg) Run:  kubectl --context addons-20201211203051-6575 replace --force -f testdata/nginx-ing.yaml
addons_test.go:136: kubectl --context addons-20201211203051-6575 replace --force -f testdata/nginx-ing.yaml: unexpected stderr: Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
(may be temporary)
addons_test.go:145: (dbg) Run:  kubectl --context addons-20201211203051-6575 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:150: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:333: "nginx" [f4c296a3-ea82-45b2-a75c-71e451882b98] Pending
helpers_test.go:333: "nginx" [f4c296a3-ea82-45b2-a75c-71e451882b98] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:333: "nginx" [f4c296a3-ea82-45b2-a75c-71e451882b98] Running

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:150: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.007490648s
addons_test.go:160: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201211203051-6575 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201211203051-6575 addons disable ingress --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:181: (dbg) Done: out/minikube-linux-amd64 -p addons-20201211203051-6575 addons disable ingress --alsologtostderr -v=1: (2.406426977s)
--- PASS: TestAddons/parallel/Ingress (17.53s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (16.84s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:275: metrics-server stabilized in 21.251837ms

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:277: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:333: "metrics-server-d9b576748-bkd6r" [519d76f3-2f20-48c1-805c-2c7f150ae3aa] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:277: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007643353s
addons_test.go:283: (dbg) Run:  kubectl --context addons-20201211203051-6575 top pods -n kube-system

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:283: (dbg) Non-zero exit: kubectl --context addons-20201211203051-6575 top pods -n kube-system: exit status 1 (91.413052ms)

                                                
                                                
** stderr ** 
	error: metrics not available yet

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:283: (dbg) Run:  kubectl --context addons-20201211203051-6575 top pods -n kube-system
addons_test.go:283: (dbg) Non-zero exit: kubectl --context addons-20201211203051-6575 top pods -n kube-system: exit status 1 (676.351442ms)

                                                
                                                
** stderr ** 
	error: metrics not available yet

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:283: (dbg) Run:  kubectl --context addons-20201211203051-6575 top pods -n kube-system
addons_test.go:301: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201211203051-6575 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:301: (dbg) Done: out/minikube-linux-amd64 -p addons-20201211203051-6575 addons disable metrics-server --alsologtostderr -v=1: (1.115892228s)
--- PASS: TestAddons/parallel/MetricsServer (16.84s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (10.89s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:319: tiller-deploy stabilized in 2.606883ms
addons_test.go:321: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:333: "tiller-deploy-565984b594-5rnch" [d2ef7f30-b1ee-4c7b-878b-e08d4f4e94c9] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:321: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009339648s
addons_test.go:336: (dbg) Run:  kubectl --context addons-20201211203051-6575 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:336: (dbg) Done: kubectl --context addons-20201211203051-6575 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (5.404053192s)
addons_test.go:341: kubectl --context addons-20201211203051-6575 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201211203051-6575 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.89s)

                                                
                                    
x
+
TestAddons/parallel/CSI (58.77s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:434: csi-hostpath-driver pods stabilized in 23.735489ms
addons_test.go:437: (dbg) Run:  kubectl --context addons-20201211203051-6575 create -f testdata/csi-hostpath-driver/pvc.yaml

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:442: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:383: (dbg) Run:  kubectl --context addons-20201211203051-6575 get pvc hpvc -o jsonpath={.status.phase} -n default

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:447: (dbg) Run:  kubectl --context addons-20201211203051-6575 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:452: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:333: "task-pv-pod" [cfa7141e-8c22-4348-9aa8-85d9621f35ae] Pending

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:333: "task-pv-pod" [cfa7141e-8c22-4348-9aa8-85d9621f35ae] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:333: "task-pv-pod" [cfa7141e-8c22-4348-9aa8-85d9621f35ae] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:452: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 25.005615006s
addons_test.go:457: (dbg) Run:  kubectl --context addons-20201211203051-6575 create -f testdata/csi-hostpath-driver/snapshotclass.yaml
addons_test.go:463: (dbg) Run:  kubectl --context addons-20201211203051-6575 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:468: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:408: (dbg) Run:  kubectl --context addons-20201211203051-6575 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:416: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:408: (dbg) Run:  kubectl --context addons-20201211203051-6575 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:473: (dbg) Run:  kubectl --context addons-20201211203051-6575 delete pod task-pv-pod

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:473: (dbg) Done: kubectl --context addons-20201211203051-6575 delete pod task-pv-pod: (5.221404047s)
addons_test.go:479: (dbg) Run:  kubectl --context addons-20201211203051-6575 delete pvc hpvc
addons_test.go:485: (dbg) Run:  kubectl --context addons-20201211203051-6575 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:490: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:383: (dbg) Run:  kubectl --context addons-20201211203051-6575 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:495: (dbg) Run:  kubectl --context addons-20201211203051-6575 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:500: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:333: "task-pv-pod-restore" [eec9ab75-4486-42c5-ba8a-421c8a80c19a] Pending

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:333: "task-pv-pod-restore" [eec9ab75-4486-42c5-ba8a-421c8a80c19a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:333: "task-pv-pod-restore" [eec9ab75-4486-42c5-ba8a-421c8a80c19a] Running
addons_test.go:500: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.006174367s
addons_test.go:505: (dbg) Run:  kubectl --context addons-20201211203051-6575 delete pod task-pv-pod-restore
addons_test.go:505: (dbg) Done: kubectl --context addons-20201211203051-6575 delete pod task-pv-pod-restore: (8.024555681s)
addons_test.go:509: (dbg) Run:  kubectl --context addons-20201211203051-6575 delete pvc hpvc-restore
addons_test.go:513: (dbg) Run:  kubectl --context addons-20201211203051-6575 delete volumesnapshot new-snapshot-demo
addons_test.go:517: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201211203051-6575 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:517: (dbg) Done: out/minikube-linux-amd64 -p addons-20201211203051-6575 addons disable csi-hostpath-driver --alsologtostderr -v=1: (5.233733621s)
addons_test.go:521: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201211203051-6575 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (58.77s)
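Editorial sketch (not part of the test output): the CSI flow above waits on two fields before moving on, the PVC phase and the snapshot readiness. The two queries below are the same ones the helpers run (helpers_test.go:383 and :408); a claim counts as healthy once the phase reads Bound, and a snapshot once readyToUse reports true.
	kubectl --context addons-20201211203051-6575 get pvc hpvc -n default -o jsonpath={.status.phase}
	kubectl --context addons-20201211203051-6575 get volumesnapshot new-snapshot-demo -n default -o jsonpath={.status.readyToUse}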

                                                
                                    
x
+
TestAddons/parallel/GCPAuth (23.46s)

                                                
                                                
=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:531: (dbg) Run:  kubectl --context addons-20201211203051-6575 create -f testdata/busybox.yaml

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:537: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:333: "busybox" [467c6bae-7f15-46b5-b9f9-87ef2d8ddb62] Pending

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:333: "busybox" [467c6bae-7f15-46b5-b9f9-87ef2d8ddb62] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:333: "busybox" [467c6bae-7f15-46b5-b9f9-87ef2d8ddb62] Running

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:537: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 10.009848638s
addons_test.go:543: (dbg) Run:  kubectl --context addons-20201211203051-6575 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:555: (dbg) Run:  kubectl --context addons-20201211203051-6575 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:578: (dbg) Run:  kubectl --context addons-20201211203051-6575 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:589: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201211203051-6575 addons disable gcp-auth --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:589: (dbg) Done: out/minikube-linux-amd64 -p addons-20201211203051-6575 addons disable gcp-auth --alsologtostderr -v=1: (12.007674822s)
--- PASS: TestAddons/parallel/GCPAuth (23.46s)
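Editorial sketch (not part of the test output): the three exec calls above verify that the gcp-auth addon injected credentials into the busybox pod. They can be collapsed into a single manual check; the env var names, pod name, and mount path are all taken from the commands logged above.
	kubectl --context addons-20201211203051-6575 exec busybox -- /bin/sh -c \
	  "printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT && cat /google-app-creds.json"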

                                                
                                    
x
+
TestCertOptions (54.27s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:46: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20201211204619-6575 --memory=1900 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker 

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:46: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20201211204619-6575 --memory=1900 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker : (50.13978663s)
cert_options_test.go:57: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20201211204619-6575 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:72: (dbg) Run:  kubectl --context cert-options-20201211204619-6575 config view
helpers_test.go:171: Cleaning up "cert-options-20201211204619-6575" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20201211204619-6575

                                                
                                                
=== CONT  TestCertOptions
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20201211204619-6575: (3.676523863s)
--- PASS: TestCertOptions (54.27s)
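Editorial sketch (not part of the test output): the openssl call above is where the extra SANs from --apiserver-ips/--apiserver-names become visible, and "config view" is where the custom --apiserver-port should surface. Grepping for the values passed on the start line is a quick manual confirmation; the grep patterns are inferred from those flags, not quoted from the report.
	out/minikube-linux-amd64 -p cert-options-20201211204619-6575 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -E '192\.168\.15\.15|www\.google\.com'
	kubectl --context cert-options-20201211204619-6575 config view | grep 8555   # custom apiserver port in kubeconfig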

                                                
                                    
x
+
TestDockerFlags (55.88s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-20201211204802-6575 --cache-images=false --memory=1800 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-20201211204802-6575 --cache-images=false --memory=1800 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (51.831459327s)
docker_test.go:46: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20201211204802-6575 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:57: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20201211204802-6575 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:171: Cleaning up "docker-flags-20201211204802-6575" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-20201211204802-6575

                                                
                                                
=== CONT  TestDockerFlags
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-20201211204802-6575: (3.209112983s)
--- PASS: TestDockerFlags (55.88s)
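Editorial sketch (not part of the test output): the two systemctl probes above are what surface the --docker-env and --docker-opt values passed on the start line. Environment should carry FOO=BAR and BAZ=BAT, and ExecStart should carry the translated daemon options; the expected strings are inferred from the start flags, not quoted from the report.
	out/minikube-linux-amd64 -p docker-flags-20201211204802-6575 ssh \
	  "sudo systemctl show docker --property=Environment --no-pager"   # expect FOO=BAR and BAZ=BAT
	out/minikube-linux-amd64 -p docker-flags-20201211204802-6575 ssh \
	  "sudo systemctl show docker --property=ExecStart --no-pager"     # expect the debug and icc=true options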

                                                
                                    
x
+
TestForceSystemdFlag (48.77s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20201211204713-6575 --memory=1800 --force-systemd --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20201211204713-6575 --memory=1800 --force-systemd --alsologtostderr -v=5 --driver=docker : (44.466960989s)
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20201211204713-6575 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:171: Cleaning up "force-systemd-flag-20201211204713-6575" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20201211204713-6575
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20201211204713-6575: (3.779504185s)
--- PASS: TestForceSystemdFlag (48.77s)
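Editorial sketch (not part of the test output): with --force-systemd the cgroup-driver probe above is expected to print systemd rather than the default cgroupfs. The command is the one from the log; the expected value is an inference from the flag, not recorded in this report.
	out/minikube-linux-amd64 -p force-systemd-flag-20201211204713-6575 ssh "docker info --format {{.CgroupDriver}}"   # expect: systemd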

                                                
                                    
x
+
TestForceSystemdEnv (52.68s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20201211204757-6575 --memory=1800 --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20201211204757-6575 --memory=1800 --alsologtostderr -v=5 --driver=docker : (49.215743047s)
docker_test.go:113: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20201211204757-6575 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:171: Cleaning up "force-systemd-env-20201211204757-6575" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20201211204757-6575
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20201211204757-6575: (2.974712296s)
--- PASS: TestForceSystemdEnv (52.68s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (6.58s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (6.58s)

                                                
                                    
x
+
TestErrorSpam (53.91s)

                                                
                                                
=== RUN   TestErrorSpam
=== PAUSE TestErrorSpam

                                                
                                                

                                                
                                                
=== CONT  TestErrorSpam
error_spam_test.go:62: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20201211204619-6575 -n=1 --memory=2250 --wait=false --driver=docker 
> docker-machine-driver-kvm2....: 65 B / 65 B  100.00%
	> docker-machine-driver-kvm2: 48.57 MiB / 48.57 MiB  100.00% 37.69 MiB p/s
	> docker-machine-driver-kvm2....: 65 B / 65 B  100.00%
	> docker-machine-driver-kvm2: 48.57 MiB / 48.57 MiB  100.00% 39.13 MiB p/s
=== CONT  TestKubernetesUpgrade

                                                
                                                
=== CONT  TestErrorSpam
error_spam_test.go:62: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20201211204619-6575 -n=1 --memory=2250 --wait=false --driver=docker : (50.123124645s)
helpers_test.go:171: Cleaning up "nospam-20201211204619-6575" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p nospam-20201211204619-6575

                                                
                                                
=== CONT  TestErrorSpam
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p nospam-20201211204619-6575: (3.783709727s)
--- PASS: TestErrorSpam (53.91s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1011: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-9933-2701-f3be305abb7c609130b6957b2b63ae924113770f/.minikube/files/etc/test/nested/copy/6575/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (46.32s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20201211203409-6575 --memory=4000 --apiserver-port=8441 --wait=true --driver=docker 
functional_test.go:231: (dbg) Done: out/minikube-linux-amd64 start -p functional-20201211203409-6575 --memory=4000 --apiserver-port=8441 --wait=true --driver=docker : (46.320666314s)
--- PASS: TestFunctional/serial/StartWithProxy (46.32s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (3.89s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:263: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20201211203409-6575 --alsologtostderr -v=8
functional_test.go:263: (dbg) Done: out/minikube-linux-amd64 start -p functional-20201211203409-6575 --alsologtostderr -v=8: (3.888792752s)
functional_test.go:267: soft start took 3.889510999s for "functional-20201211203409-6575" cluster.
--- PASS: TestFunctional/serial/SoftStart (3.89s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:284: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.26s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:297: (dbg) Run:  kubectl --context functional-20201211203409-6575 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.26s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:529: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 cache add k8s.gcr.io/pause:3.1
functional_test.go:529: (dbg) Done: out/minikube-linux-amd64 -p functional-20201211203409-6575 cache add k8s.gcr.io/pause:3.1: (1.382579161s)
functional_test.go:529: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 cache add k8s.gcr.io/pause:3.3
functional_test.go:529: (dbg) Done: out/minikube-linux-amd64 -p functional-20201211203409-6575 cache add k8s.gcr.io/pause:3.3: (1.508139049s)
functional_test.go:529: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 cache add k8s.gcr.io/pause:latest
functional_test.go:529: (dbg) Done: out/minikube-linux-amd64 -p functional-20201211203409-6575 cache add k8s.gcr.io/pause:latest: (1.329977743s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (0.84s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:558: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20201211203409-6575 /tmp/functional-20201211203409-6575115257903
functional_test.go:563: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 cache add minikube-local-cache-test:functional-20201211203409-6575
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.84s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:570: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:577: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:590: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:609: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:609: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (306.490257ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 cache reload
functional_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p functional-20201211203409-6575 cache reload: (1.11301788s)
functional_test.go:619: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.08s)
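Editorial sketch (not part of the test output): the cache_reload sequence above, condensed. The image is removed inside the node, the crictl lookup fails as expected, "cache reload" pushes the image back from the local cache, and the second lookup succeeds. All four commands are the ones logged above.
	out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh sudo docker rmi k8s.gcr.io/pause:latest
	out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # exits 1: image gone
	out/minikube-linux-amd64 -p functional-20201211203409-6575 cache reload
	out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # succeeds again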

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:628: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:628: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 kubectl -- --context functional-20201211203409-6575 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.36s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.36s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:334: (dbg) Run:  out/kubectl --context functional-20201211203409-6575 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.36s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (24.38s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:348: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20201211203409-6575 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision
functional_test.go:348: (dbg) Done: out/minikube-linux-amd64 start -p functional-20201211203409-6575 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision: (24.374795724s)
functional_test.go:352: restart took 24.374916815s for "functional-20201211203409-6575" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (24.38s)
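Editorial sketch (not part of the test output): --extra-config takes component.key=value pairs, so the restart above passes enable-admission-plugins straight through to the apiserver. The command below is the one from the log, shown only to make the flag format explicit.
	out/minikube-linux-amd64 start -p functional-20201211203409-6575 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision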

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutputError (0.5s)

                                                
                                                
=== RUN   TestJSONOutputError
json_output_test.go:134: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20201211203634-6575 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:134: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20201211203634-6575 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (109.131985ms)

                                                
                                                
-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20201211203634-6575] minikube v1.15.1 on Debian 9.13","name":"Initial Minikube Setup","totalsteps":"13"},"datacontenttype":"application/json","id":"7bfe6ce7-55ba-400c-bbdf-d07958544d09","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-9933-2701-f3be305abb7c609130b6957b2b63ae924113770f/kubeconfig"},"datacontenttype":"application/json","id":"bbc98b84-6a7a-4719-9108-07d47b5b2163","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"bca6ee7a-a76c-4f4d-83f7-45ed12335ffe","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-9933-2701-f3be305abb7c609130b6957b2b63ae924113770f/.minikube"},"datacontenttype":"application/json","id":"2bb480bc-5e75-4a0b-b297-445c3f9880aa","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=9933"},"datacontenttype":"application/json","id":"91e0422f-0010-4f87-959d-e789ddc9f2fa","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"f45ff210-aa33-49da-b6a5-bab73160cfde","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

                                                
                                                
-- /stdout --
helpers_test.go:171: Cleaning up "json-output-error-20201211203634-6575" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20201211203634-6575
--- PASS: TestJSONOutputError (0.50s)
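
For reference, each line of the --output=json stdout above is a self-contained CloudEvents-style JSON object. A minimal Go sketch (an illustration only, not part of the test suite; field names are taken from the captured output) that scans such a stream and reports any error event:

	// Decodes newline-delimited minikube JSON events read from stdin and
	// prints the message and exit code of any "error" event it finds.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type minikubeEvent struct {
		Type string            `json:"type"` // e.g. io.k8s.sigs.minikube.error
		Data map[string]string `json:"data"` // message, exitcode, advice, ...
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev minikubeEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip anything that is not a JSON event
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error: %s (exitcode %s)\n", ev.Data["message"], ev.Data["exitcode"])
			}
		}
	}

Fed the stdout captured above, this would print the DRV_UNSUPPORTED_OS message with exit code 56.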

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (78.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:68: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20201211203635-6575 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
multinode_test.go:68: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20201211203635-6575 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m17.434837382s)
multinode_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201211203635-6575 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (78.09s)

                                                
                                    
TestMultiNode/serial/AddNode (17.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20201211203635-6575 -v 3 --alsologtostderr
multinode_test.go:92: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20201211203635-6575 -v 3 --alsologtostderr: (16.977722415s)
multinode_test.go:98: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201211203635-6575 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.89s)

                                                
                                    
TestMultiNode/serial/StopNode (2.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201211203635-6575 node stop m03
multinode_test.go:114: (dbg) Done: out/minikube-linux-amd64 -p multinode-20201211203635-6575 node stop m03: (1.42950288s)
multinode_test.go:120: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201211203635-6575 status
multinode_test.go:120: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20201211203635-6575 status: exit status 7 (662.283145ms)

                                                
                                                
-- stdout --
	multinode-20201211203635-6575
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	timeToStop: Nonexistent
	
	multinode-20201211203635-6575-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20201211203635-6575-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201211203635-6575 status --alsologtostderr
multinode_test.go:127: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20201211203635-6575 status --alsologtostderr: exit status 7 (673.060023ms)

                                                
                                                
-- stdout --
	multinode-20201211203635-6575
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	timeToStop: Nonexistent
	
	multinode-20201211203635-6575-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20201211203635-6575-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 20:38:13.553879   69433 out.go:217] Setting OutFile to fd 1 ...
	I1211 20:38:13.554147   69433 out.go:264] TERM=,COLORTERM=, which probably does not support color
	I1211 20:38:13.554163   69433 out.go:230] Setting ErrFile to fd 2...
	I1211 20:38:13.554168   69433 out.go:264] TERM=,COLORTERM=, which probably does not support color
	I1211 20:38:13.554292   69433 root.go:279] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-9933-2701-f3be305abb7c609130b6957b2b63ae924113770f/.minikube/bin
	I1211 20:38:13.554477   69433 out.go:224] Setting JSON to false
	I1211 20:38:13.554496   69433 mustload.go:66] Loading cluster: multinode-20201211203635-6575
	I1211 20:38:13.554807   69433 status.go:241] checking status of multinode-20201211203635-6575 ...
	I1211 20:38:13.555385   69433 cli_runner.go:111] Run: docker container inspect multinode-20201211203635-6575 --format={{.State.Status}}
	I1211 20:38:13.607907   69433 status.go:317] multinode-20201211203635-6575 host status = "Running" (err=<nil>)
	I1211 20:38:13.607965   69433 host.go:66] Checking if "multinode-20201211203635-6575" exists ...
	I1211 20:38:13.608362   69433 cli_runner.go:111] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20201211203635-6575
	I1211 20:38:13.659213   69433 host.go:66] Checking if "multinode-20201211203635-6575" exists ...
	I1211 20:38:13.659526   69433 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 20:38:13.659569   69433 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20201211203635-6575
	I1211 20:38:13.706821   69433 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32795 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-9933-2701-f3be305abb7c609130b6957b2b63ae924113770f/.minikube/machines/multinode-20201211203635-6575/id_rsa Username:docker}
	I1211 20:38:13.808785   69433 ssh_runner.go:148] Run: systemctl --version
	I1211 20:38:13.813357   69433 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service kubelet
	I1211 20:38:13.825154   69433 kubeconfig.go:93] found "multinode-20201211203635-6575" server: "https://192.168.59.176:8443"
	I1211 20:38:13.825195   69433 api_server.go:146] Checking apiserver status ...
	I1211 20:38:13.825226   69433 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 20:38:13.846769   69433 ssh_runner.go:148] Run: sudo egrep ^[0-9]+:freezer: /proc/1863/cgroup
	I1211 20:38:13.855494   69433 api_server.go:162] apiserver freezer: "6:freezer:/docker/ec1af16f83d48d7ac0aa622c1ab78d6c0280286eb556f425b846d292dc4abde7/kubepods/burstable/pod30fb9afba4c39ffe9c14831adf8aec3e/0ca86ba7769a10e21b9e58d0a78c3a502269a9905d7943602c62ca8ef47354ff"
	I1211 20:38:13.855553   69433 ssh_runner.go:148] Run: sudo cat /sys/fs/cgroup/freezer/docker/ec1af16f83d48d7ac0aa622c1ab78d6c0280286eb556f425b846d292dc4abde7/kubepods/burstable/pod30fb9afba4c39ffe9c14831adf8aec3e/0ca86ba7769a10e21b9e58d0a78c3a502269a9905d7943602c62ca8ef47354ff/freezer.state
	I1211 20:38:13.862889   69433 api_server.go:184] freezer state: "THAWED"
	I1211 20:38:13.862969   69433 api_server.go:221] Checking apiserver healthz at https://192.168.59.176:8443/healthz ...
	I1211 20:38:13.870057   69433 api_server.go:241] https://192.168.59.176:8443/healthz returned 200:
	ok
	I1211 20:38:13.870141   69433 status.go:395] multinode-20201211203635-6575 apiserver status = Running (err=<nil>)
	I1211 20:38:13.870160   69433 status.go:243] multinode-20201211203635-6575 status: &{Name:multinode-20201211203635-6575 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop:Nonexistent}
	I1211 20:38:13.870181   69433 status.go:241] checking status of multinode-20201211203635-6575-m02 ...
	I1211 20:38:13.870581   69433 cli_runner.go:111] Run: docker container inspect multinode-20201211203635-6575-m02 --format={{.State.Status}}
	I1211 20:38:13.919071   69433 status.go:317] multinode-20201211203635-6575-m02 host status = "Running" (err=<nil>)
	I1211 20:38:13.919104   69433 host.go:66] Checking if "multinode-20201211203635-6575-m02" exists ...
	I1211 20:38:13.919442   69433 cli_runner.go:111] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20201211203635-6575-m02
	I1211 20:38:13.969485   69433 host.go:66] Checking if "multinode-20201211203635-6575-m02" exists ...
	I1211 20:38:13.969904   69433 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1211 20:38:13.969963   69433 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20201211203635-6575-m02
	I1211 20:38:14.017336   69433 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32799 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-9933-2701-f3be305abb7c609130b6957b2b63ae924113770f/.minikube/machines/multinode-20201211203635-6575-m02/id_rsa Username:docker}
	I1211 20:38:14.108144   69433 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service kubelet
	I1211 20:38:14.119125   69433 status.go:243] multinode-20201211203635-6575-m02 status: &{Name:multinode-20201211203635-6575-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop:Nonexistent}
	I1211 20:38:14.119175   69433 status.go:241] checking status of multinode-20201211203635-6575-m03 ...
	I1211 20:38:14.119461   69433 cli_runner.go:111] Run: docker container inspect multinode-20201211203635-6575-m03 --format={{.State.Status}}
	I1211 20:38:14.167549   69433 status.go:317] multinode-20201211203635-6575-m03 host status = "Stopped" (err=<nil>)
	I1211 20:38:14.167576   69433 status.go:330] host is not running, skipping remaining checks
	I1211 20:38:14.167583   69433 status.go:243] multinode-20201211203635-6575-m03 status: &{Name:multinode-20201211203635-6575-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop:Nonexistent}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.77s)
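
The two non-zero exits above come from "minikube status" itself: with one worker stopped it still prints the per-node report but exits with status 7. A small Go sketch (hypothetical, mirroring the command the test invokes) that runs the same status check and treats that exit code as "a node is down" rather than a hard failure:

	// Runs "minikube status" for the profile used above and inspects the exit code.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "multinode-20201211203635-6575" // profile name from the log above
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "status")
		out, err := cmd.Output()
		fmt.Print(string(out))
		if exitErr, ok := err.(*exec.ExitError); ok {
			// Exit status 7 accompanied the Stopped worker in the run above.
			fmt.Printf("status exited with code %d: at least one node is not running\n", exitErr.ExitCode())
		} else if err != nil {
			fmt.Println("could not run minikube status:", err)
		}
	}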

                                                
                                    
TestMultiNode/serial/StartAfterStop (27.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:147: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201211203635-6575 node start m03 --alsologtostderr
multinode_test.go:157: (dbg) Done: out/minikube-linux-amd64 -p multinode-20201211203635-6575 node start m03 --alsologtostderr: (25.858849305s)
multinode_test.go:164: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201211203635-6575 status
multinode_test.go:178: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (27.11s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:265: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201211203635-6575 node delete m03
multinode_test.go:265: (dbg) Done: out/minikube-linux-amd64 -p multinode-20201211203635-6575 node delete m03: (5.16901538s)
multinode_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201211203635-6575 status --alsologtostderr
multinode_test.go:285: (dbg) Run:  docker volume ls
multinode_test.go:295: (dbg) Run:  kubectl get nodes
multinode_test.go:303: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.97s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (12.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:186: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201211203635-6575 stop
multinode_test.go:186: (dbg) Done: out/minikube-linux-amd64 -p multinode-20201211203635-6575 stop: (12.373508764s)
multinode_test.go:192: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201211203635-6575 status
multinode_test.go:192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20201211203635-6575 status: exit status 7 (159.790736ms)

                                                
                                                
-- stdout --
	multinode-20201211203635-6575
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	timeToStop: Nonexistent
	
	multinode-20201211203635-6575-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201211203635-6575 status --alsologtostderr
multinode_test.go:199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20201211203635-6575 status --alsologtostderr: exit status 7 (157.334216ms)

                                                
                                                
-- stdout --
	multinode-20201211203635-6575
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	timeToStop: Nonexistent
	
	multinode-20201211203635-6575-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 20:38:59.845401   74384 out.go:217] Setting OutFile to fd 1 ...
	I1211 20:38:59.845654   74384 out.go:264] TERM=,COLORTERM=, which probably does not support color
	I1211 20:38:59.845667   74384 out.go:230] Setting ErrFile to fd 2...
	I1211 20:38:59.845671   74384 out.go:264] TERM=,COLORTERM=, which probably does not support color
	I1211 20:38:59.845767   74384 root.go:279] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-9933-2701-f3be305abb7c609130b6957b2b63ae924113770f/.minikube/bin
	I1211 20:38:59.845939   74384 out.go:224] Setting JSON to false
	I1211 20:38:59.845956   74384 mustload.go:66] Loading cluster: multinode-20201211203635-6575
	I1211 20:38:59.846224   74384 status.go:241] checking status of multinode-20201211203635-6575 ...
	I1211 20:38:59.846672   74384 cli_runner.go:111] Run: docker container inspect multinode-20201211203635-6575 --format={{.State.Status}}
	I1211 20:38:59.894852   74384 status.go:317] multinode-20201211203635-6575 host status = "Stopped" (err=<nil>)
	I1211 20:38:59.894885   74384 status.go:330] host is not running, skipping remaining checks
	I1211 20:38:59.894893   74384 status.go:243] multinode-20201211203635-6575 status: &{Name:multinode-20201211203635-6575 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop:Nonexistent}
	I1211 20:38:59.894927   74384 status.go:241] checking status of multinode-20201211203635-6575-m02 ...
	I1211 20:38:59.895283   74384 cli_runner.go:111] Run: docker container inspect multinode-20201211203635-6575-m02 --format={{.State.Status}}
	I1211 20:38:59.942755   74384 status.go:317] multinode-20201211203635-6575-m02 host status = "Stopped" (err=<nil>)
	I1211 20:38:59.942793   74384 status.go:330] host is not running, skipping remaining checks
	I1211 20:38:59.942804   74384 status.go:243] multinode-20201211203635-6575-m02 status: &{Name:multinode-20201211203635-6575-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop:Nonexistent}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (12.69s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (87.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:215: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20201211203635-6575 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:225: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20201211203635-6575 --wait=true -v=8 --alsologtostderr --driver=docker : (1m26.777640714s)
multinode_test.go:231: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201211203635-6575 status --alsologtostderr
multinode_test.go:245: (dbg) Run:  kubectl get nodes
multinode_test.go:253: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (87.60s)

                                                
                                    
TestPreload (227.02s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20201211204032-6575 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20201211204032-6575 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: (1m2.422225306s)
preload_test.go:50: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20201211204032-6575 -- docker pull busybox
preload_test.go:50: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20201211204032-6575 -- docker pull busybox: (2.5802062s)
preload_test.go:60: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20201211204032-6575 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3
preload_test.go:60: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20201211204032-6575 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3: (2m38.774954877s)
preload_test.go:64: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20201211204032-6575 -- docker images
helpers_test.go:171: Cleaning up "test-preload-20201211204032-6575" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20201211204032-6575
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20201211204032-6575: (2.889783982s)
--- PASS: TestPreload (227.02s)

                                                
                                    
TestScheduledStopUnix (62.79s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:124: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20201211204419-6575 --memory=1900 --driver=docker 
scheduled_stop_test.go:124: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20201211204419-6575 --memory=1900 --driver=docker : (27.699082853s)
scheduled_stop_test.go:133: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20201211204419-6575 --schedule 5m
scheduled_stop_test.go:187: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20201211204419-6575 -n scheduled-stop-20201211204419-6575
scheduled_stop_test.go:165: signal error was:  <nil>
scheduled_stop_test.go:133: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20201211204419-6575 --schedule 8s
scheduled_stop_test.go:165: signal error was:  os: process already finished
scheduled_stop_test.go:133: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20201211204419-6575 --cancel-scheduled
scheduled_stop_test.go:172: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20201211204419-6575 -n scheduled-stop-20201211204419-6575
scheduled_stop_test.go:133: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20201211204419-6575 --schedule 5s
scheduled_stop_test.go:165: signal error was:  os: process already finished
scheduled_stop_test.go:172: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20201211204419-6575 -n scheduled-stop-20201211204419-6575
scheduled_stop_test.go:172: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20201211204419-6575 -n scheduled-stop-20201211204419-6575
scheduled_stop_test.go:172: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20201211204419-6575 -n scheduled-stop-20201211204419-6575
scheduled_stop_test.go:172: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20201211204419-6575 -n scheduled-stop-20201211204419-6575
scheduled_stop_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20201211204419-6575 -n scheduled-stop-20201211204419-6575: exit status 3 (2.498069993s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1211 20:45:08.970908   99121 status.go:363] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:43200->127.0.0.1:32823: read: connection reset by peer
	E1211 20:45:08.971311   99121 status.go:235] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:43200->127.0.0.1:32823: read: connection reset by peer

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: status error: exit status 3 (may be ok)
scheduled_stop_test.go:172: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20201211204419-6575 -n scheduled-stop-20201211204419-6575
scheduled_stop_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20201211204419-6575 -n scheduled-stop-20201211204419-6575: exit status 3 (2.496558407s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1211 20:45:16.066381   99447 status.go:363] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:43268->127.0.0.1:32823: read: connection reset by peer
	E1211 20:45:16.066758   99447 status.go:235] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:43268->127.0.0.1:32823: read: connection reset by peer

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: status error: exit status 3 (may be ok)
scheduled_stop_test.go:172: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20201211204419-6575 -n scheduled-stop-20201211204419-6575
scheduled_stop_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20201211204419-6575 -n scheduled-stop-20201211204419-6575: exit status 7 (105.320461ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:172: status error: exit status 7 (may be ok)
scheduled_stop_test.go:172: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20201211204419-6575 -n scheduled-stop-20201211204419-6575
scheduled_stop_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20201211204419-6575 -n scheduled-stop-20201211204419-6575: exit status 7 (104.048426ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
scheduled_stop_test.go:172: status error: exit status 7 (may be ok)
helpers_test.go:171: Cleaning up "scheduled-stop-20201211204419-6575" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20201211204419-6575
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20201211204419-6575: (2.249254728s)
--- PASS: TestScheduledStopUnix (62.79s)
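
The flow above is: schedule a stop ("--schedule 5m"), cancel it ("--cancel-scheduled"), schedule a short one ("--schedule 5s"), then poll status until the host reports Stopped and TimeToStop reports Nonexistent. A rough Go sketch of that polling step (an assumption about how one might drive it, not the test's own code; only flags shown in the log are used):

	// Schedules a stop and polls "--format={{.Host}}" until the host is Stopped.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		bin := "out/minikube-linux-amd64"
		profile := "scheduled-stop-20201211204419-6575" // profile name from the log above

		// Ask minikube to stop the cluster a few seconds from now.
		if err := exec.Command(bin, "stop", "-p", profile, "--schedule", "5s").Run(); err != nil {
			fmt.Println("scheduling the stop failed:", err)
			return
		}

		// status exits non-zero once the host is stopped, so the error is ignored here.
		for i := 0; i < 10; i++ {
			out, _ := exec.Command(bin, "status", "-p", profile, "--format={{.Host}}").Output()
			if strings.Contains(string(out), "Stopped") {
				fmt.Println("host stopped as scheduled")
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("host did not stop within the polling window")
	}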

                                                
                                    
TestInsufficientStorage (11.88s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20201211204607-6575 --memory=1900 --output=json --wait=true --driver=docker 
status_test.go:49: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20201211204607-6575 --memory=1900 --output=json --wait=true --driver=docker : exit status 26 (8.866309766s)

                                                
                                                
-- stdout --
	{"data":{"currentstep":"0","message":"[insufficient-storage-20201211204607-6575] minikube v1.15.1 on Debian 9.13","name":"Initial Minikube Setup","totalsteps":"13"},"datacontenttype":"application/json","id":"2ab4774b-059d-49ea-b784-072ce02cfbba","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-9933-2701-f3be305abb7c609130b6957b2b63ae924113770f/kubeconfig"},"datacontenttype":"application/json","id":"926488bf-0b7a-4a6f-bf1b-148a6b9cebfa","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"f34f8cb2-69cf-4899-b669-a49233a66bac","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-9933-2701-f3be305abb7c609130b6957b2b63ae924113770f/.minikube"},"datacontenttype":"application/json","id":"d8e6dd5a-8762-44f3-995c-e557c082b90d","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=9933"},"datacontenttype":"application/json","id":"1421605f-975c-4b3c-ad2e-f1915b1bd394","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"},"datacontenttype":"application/json","id":"7742dc73-f857-4a35-bf64-41f12f711615","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"13"},"datacontenttype":"application/json","id":"df1fb39c-cb85-474e-8a57-fe2dd8f7feda","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"Requested memory allocation (1900MB) is less than the recommended minimum 1907MB. Deployments may fail."},"datacontenttype":"application/json","id":"d9c22fa1-4128-43b2-b062-a4f958d145c1","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.warning"}
	{"data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20201211204607-6575 in cluster insufficient-storage-20201211204607-6575","name":"Starting Node","totalsteps":"13"},"datacontenttype":"application/json","id":"fee7a214-c8cc-444c-af38-b2bf03ea8511","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"7","message":"Creating docker container (CPUs=2, Memory=1900MB) ...","name":"Creating Container","totalsteps":"13"},"datacontenttype":"application/json","id":"e135efb1-8a3e-4107-92b5-e6e27bac47a4","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"advice":"Try at least one of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused docker data\n\t\t\t2. Increase the amount of memory allocated to Docker for Desktop via \n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""},"datacontenttype":"application/json","id":"694ef954-7490-46bc-a595-bdeb6a204b24","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

                                                
                                                
-- /stdout --
status_test.go:75: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20201211204607-6575 --output=json --layout=cluster
status_test.go:75: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20201211204607-6575 --output=json --layout=cluster: exit status 7 (316.423478ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20201211204607-6575","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=1900MB) ...","BinaryVersion":"v1.15.1","TimeToStop":"Nonexistent","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20201211204607-6575","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1211 20:46:16.915225  107533 status.go:389] kubeconfig endpoint: extract IP: "insufficient-storage-20201211204607-6575" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-9933-2701-f3be305abb7c609130b6957b2b63ae924113770f/kubeconfig

                                                
                                                
** /stderr **
status_test.go:75: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20201211204607-6575 --output=json --layout=cluster
status_test.go:75: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20201211204607-6575 --output=json --layout=cluster: exit status 7 (314.720278ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20201211204607-6575","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.15.1","TimeToStop":"Nonexistent","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20201211204607-6575","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1211 20:46:17.230800  107593 status.go:389] kubeconfig endpoint: extract IP: "insufficient-storage-20201211204607-6575" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-9933-2701-f3be305abb7c609130b6957b2b63ae924113770f/kubeconfig
	E1211 20:46:17.243963  107593 status.go:533] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-9933-2701-f3be305abb7c609130b6957b2b63ae924113770f/.minikube/profiles/insufficient-storage-20201211204607-6575/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:171: Cleaning up "insufficient-storage-20201211204607-6575" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20201211204607-6575
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20201211204607-6575: (2.377974845s)
--- PASS: TestInsufficientStorage (11.88s)
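
The "--output=json --layout=cluster" status above is a single JSON document; StatusCode 507 / "InsufficientStorage" is how the low-disk condition surfaces. A minimal Go decoder for just the fields visible in that output (illustrative only):

	// Decodes the cluster-layout status JSON from stdin and prints each status.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type clusterStatus struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
		Nodes      []struct {
			Name       string `json:"Name"`
			StatusCode int    `json:"StatusCode"`
			StatusName string `json:"StatusName"`
		} `json:"Nodes"`
	}

	func main() {
		var st clusterStatus
		if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
			fmt.Println("decode:", err)
			return
		}
		fmt.Printf("cluster %s: %d (%s)\n", st.Name, st.StatusCode, st.StatusName)
		for _, n := range st.Nodes {
			fmt.Printf("  node %s: %d (%s)\n", n.Name, n.StatusCode, n.StatusName)
		}
	}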

                                                
                                    
TestRunningBinaryUpgrade (98.28s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:95: (dbg) Run:  /tmp/minikube-v1.9.0.333785528.exe start -p running-upgrade-20201211204849-6575 --memory=2200 --vm-driver=docker 

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:95: (dbg) Done: /tmp/minikube-v1.9.0.333785528.exe start -p running-upgrade-20201211204849-6575 --memory=2200 --vm-driver=docker : (1m0.459230547s)
version_upgrade_test.go:105: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20201211204849-6575 --memory=2200 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:105: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20201211204849-6575 --memory=2200 --alsologtostderr -v=1 --driver=docker : (33.865835201s)
helpers_test.go:171: Cleaning up "running-upgrade-20201211204849-6575" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20201211204849-6575

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20201211204849-6575: (3.207262159s)
--- PASS: TestRunningBinaryUpgrade (98.28s)

                                                
                                    
TestStoppedBinaryUpgrade (92.3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade
=== PAUSE TestStoppedBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:142: (dbg) Run:  /tmp/minikube-v1.8.0.695455159.exe start -p stopped-upgrade-20201211204856-6575 --memory=2200 --vm-driver=docker 

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:142: (dbg) Done: /tmp/minikube-v1.8.0.695455159.exe start -p stopped-upgrade-20201211204856-6575 --memory=2200 --vm-driver=docker : (56.569535141s)
version_upgrade_test.go:151: (dbg) Run:  /tmp/minikube-v1.8.0.695455159.exe -p stopped-upgrade-20201211204856-6575 stop
version_upgrade_test.go:151: (dbg) Done: /tmp/minikube-v1.8.0.695455159.exe -p stopped-upgrade-20201211204856-6575 stop: (2.087705348s)
version_upgrade_test.go:157: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20201211204856-6575 --memory=2200 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:157: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20201211204856-6575 --memory=2200 --alsologtostderr -v=1 --driver=docker : (30.115811339s)
helpers_test.go:171: Cleaning up "stopped-upgrade-20201211204856-6575" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p stopped-upgrade-20201211204856-6575

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p stopped-upgrade-20201211204856-6575: (2.808009251s)
--- PASS: TestStoppedBinaryUpgrade (92.30s)

                                                
                                    
TestKubernetesUpgrade (149.82s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:172: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20201211204626-6575 --memory=2200 --kubernetes-version=v1.13.0 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:172: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20201211204626-6575 --memory=2200 --kubernetes-version=v1.13.0 --alsologtostderr -v=1 --driver=docker : (1m7.890082414s)
version_upgrade_test.go:177: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20201211204626-6575
version_upgrade_test.go:177: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20201211204626-6575: (11.154056896s)
version_upgrade_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20201211204626-6575 status --format={{.Host}}
version_upgrade_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20201211204626-6575 status --format={{.Host}}: exit status 7 (122.471955ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:184: status error: exit status 7 (may be ok)
version_upgrade_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20201211204626-6575 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20201211204626-6575 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker : (37.739832405s)
version_upgrade_test.go:198: (dbg) Run:  kubectl --context kubernetes-upgrade-20201211204626-6575 version --output=json
version_upgrade_test.go:217: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20201211204626-6575 --memory=2200 --kubernetes-version=v1.13.0 --driver=docker 
version_upgrade_test.go:219: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20201211204626-6575 --memory=2200 --kubernetes-version=v1.13.0 --driver=docker : exit status 106 (167.804233ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20201211204626-6575] minikube v1.15.1 on Debian 9.13
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-9933-2701-f3be305abb7c609130b6957b2b63ae924113770f/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-9933-2701-f3be305abb7c609130b6957b2b63ae924113770f/.minikube
	  - MINIKUBE_LOCATION=9933
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.20.0 cluster to v1.13.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.13.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20201211204626-6575
	    minikube start -p kubernetes-upgrade-20201211204626-6575 --kubernetes-version=v1.13.0
	    
	    2) Create a second cluster with Kubernetes 1.13.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20201211204626-65752 --kubernetes-version=v1.13.0
	    
	    3) Use the existing cluster at version Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20201211204626-6575 --kubernetes-version=v1.20.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:223: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20201211204626-6575 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:225: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20201211204626-6575 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker : (28.362292505s)
helpers_test.go:171: Cleaning up "kubernetes-upgrade-20201211204626-6575" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20201211204626-6575

                                                
                                                
=== CONT  TestKubernetesUpgrade
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20201211204626-6575: (4.295736978s)
--- PASS: TestKubernetesUpgrade (149.82s)
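
The downgrade attempt above is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) while the stdout lists the supported ways forward. A hedged Go sketch of that check, using only the command and exit code shown in the log:

	// Attempts the unsupported downgrade and verifies it fails with exit status 106.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "kubernetes-upgrade-20201211204626-6575" // profile name from the log above
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", profile,
			"--memory=2200", "--kubernetes-version=v1.13.0", "--driver=docker")
		err := cmd.Run()
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 106 {
			fmt.Println("downgrade correctly refused (K8S_DOWNGRADE_UNSUPPORTED)")
			return
		}
		fmt.Println("unexpected result from downgrade attempt:", err)
	}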

                                                
                                    
TestMissingContainerUpgrade (350.07s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:259: (dbg) Run:  /tmp/minikube-v1.9.1.283877501.exe start -p missing-upgrade-20201211204713-6575 --memory=2200 --driver=docker 

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:259: (dbg) Done: /tmp/minikube-v1.9.1.283877501.exe start -p missing-upgrade-20201211204713-6575 --memory=2200 --driver=docker : (1m5.84117443s)
version_upgrade_test.go:268: (dbg) Run:  docker stop missing-upgrade-20201211204713-6575
version_upgrade_test.go:268: (dbg) Done: docker stop missing-upgrade-20201211204713-6575: (1.984067001s)
version_upgrade_test.go:273: (dbg) Run:  docker rm missing-upgrade-20201211204713-6575
version_upgrade_test.go:279: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20201211204713-6575 --memory=2200 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:279: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20201211204713-6575 --memory=2200 --alsologtostderr -v=1 --driver=docker : (4m36.730595552s)
helpers_test.go:171: Cleaning up "missing-upgrade-20201211204713-6575" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20201211204713-6575
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20201211204713-6575: (4.192227771s)
--- PASS: TestMissingContainerUpgrade (350.07s)

                                                
                                    
TestPause/serial/Start (73.48s)

                                                
                                                
=== RUN   TestPause/serial/Start

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:75: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20201211204619-6575 --memory=1800 --install-addons=false --wait=all --driver=docker 

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:75: (dbg) Done: out/minikube-linux-amd64 start -p pause-20201211204619-6575 --memory=1800 --install-addons=false --wait=all --driver=docker : (1m13.47546755s)
--- PASS: TestPause/serial/Start (73.48s)

                                                
                                    
TestFunctional/parallel/ComponentHealth (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ComponentHealth
=== PAUSE TestFunctional/parallel/ComponentHealth

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ComponentHealth
functional_test.go:379: (dbg) Run:  kubectl --context functional-20201211203409-6575 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:391: etcd phase: Running
functional_test.go:391: control-plane phase: Running
functional_test.go:391: kube-apiserver phase: Running
functional_test.go:391: control-plane phase: Running
functional_test.go:391: kube-controller-manager phase: Running
functional_test.go:391: control-plane phase: Running
functional_test.go:391: kube-scheduler phase: Running
functional_test.go:391: control-plane phase: Running
--- PASS: TestFunctional/parallel/ComponentHealth (0.28s)
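
The health check above lists the control-plane pods as JSON and prints each component's phase. A small Go sketch that decodes the same "kubectl ... -o=json" output (only the fields used here are modeled; the "component" label is the standard kubeadm label, assumed rather than shown in the log):

	// Reads a kubectl pod-list JSON document from stdin and prints pod phases.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Labels map[string]string `json:"labels"`
			} `json:"metadata"`
			Status struct {
				Phase string `json:"phase"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		var pods podList
		if err := json.NewDecoder(os.Stdin).Decode(&pods); err != nil {
			fmt.Println("decode:", err)
			return
		}
		for _, p := range pods.Items {
			// The test output above reads e.g. "etcd phase: Running".
			fmt.Printf("%s phase: %s\n", p.Metadata.Labels["component"], p.Status.Phase)
		}
	}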

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:654: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:654: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 config get cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:654: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20201211203409-6575 config get cpus: exit status 14 (62.53659ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:654: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 config set cpus 2
functional_test.go:654: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 config get cpus
functional_test.go:654: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 config unset cpus
functional_test.go:654: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 config get cpus
functional_test.go:654: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20201211203409-6575 config get cpus: exit status 14 (66.44476ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)
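
The config round trip above shows that "config get cpus" exits with status 14 when the key has never been set, and succeeds once "config set cpus 2" has run. A rough Go sketch of the same sequence (illustrative; it shells out to the commands exactly as logged):

	// Exercises config unset/get/set/get/unset against the functional profile.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		bin := "out/minikube-linux-amd64"
		profile := "functional-20201211203409-6575" // profile name from the log above

		get := func() (string, error) {
			out, err := exec.Command(bin, "-p", profile, "config", "get", "cpus").Output()
			return strings.TrimSpace(string(out)), err
		}

		if _, err := get(); err != nil {
			// Matches the log: exit status 14, "specified key could not be found in config".
			fmt.Println("cpus is not set yet:", err)
		}
		_ = exec.Command(bin, "-p", profile, "config", "set", "cpus", "2").Run()
		if v, err := get(); err == nil {
			fmt.Println("cpus is now", v)
		}
		_ = exec.Command(bin, "-p", profile, "config", "unset", "cpus").Run()
	}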

                                                
                                    
TestFunctional/parallel/DashboardCmd (5.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:456: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url -p functional-20201211203409-6575 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:461: (dbg) stopping [out/minikube-linux-amd64 dashboard --url -p functional-20201211203409-6575 --alsologtostderr -v=1] ...
helpers_test.go:497: unable to kill pid 182781: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (5.47s)

TestFunctional/parallel/DryRun (0.74s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:501: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20201211203409-6575 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:501: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20201211203409-6575 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (315.29831ms)

-- stdout --
	* [functional-20201211203409-6575] minikube v1.15.1 on Debian 9.13
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-9933-2701-f3be305abb7c609130b6957b2b63ae924113770f/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-9933-2701-f3be305abb7c609130b6957b2b63ae924113770f/.minikube
	  - MINIKUBE_LOCATION=9933
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1211 20:50:29.055230  180511 out.go:217] Setting OutFile to fd 1 ...
	I1211 20:50:29.055435  180511 out.go:264] TERM=,COLORTERM=, which probably does not support color
	I1211 20:50:29.055453  180511 out.go:230] Setting ErrFile to fd 2...
	I1211 20:50:29.055458  180511 out.go:264] TERM=,COLORTERM=, which probably does not support color
	I1211 20:50:29.055572  180511 root.go:279] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-9933-2701-f3be305abb7c609130b6957b2b63ae924113770f/.minikube/bin
	I1211 20:50:29.055819  180511 out.go:224] Setting JSON to false
	I1211 20:50:29.109094  180511 start.go:104] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":1988,"bootTime":1607717841,"procs":263,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-14-amd64","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I1211 20:50:29.109917  180511 start.go:114] virtualization: kvm host
	I1211 20:50:29.113685  180511 out.go:119] * [functional-20201211203409-6575] minikube v1.15.1 on Debian 9.13
	I1211 20:50:29.115512  180511 out.go:119]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-9933-2701-f3be305abb7c609130b6957b2b63ae924113770f/kubeconfig
	I1211 20:50:29.117361  180511 out.go:119]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1211 20:50:29.119255  180511 out.go:119]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-9933-2701-f3be305abb7c609130b6957b2b63ae924113770f/.minikube
	I1211 20:50:29.125888  180511 out.go:119]   - MINIKUBE_LOCATION=9933
	I1211 20:50:29.128553  180511 driver.go:303] Setting default libvirt URI to qemu:///system
	I1211 20:50:29.187208  180511 docker.go:117] docker version: linux-19.03.14
	I1211 20:50:29.187299  180511 cli_runner.go:111] Run: docker system info --format "{{json .}}"
	I1211 20:50:29.292574  180511 info.go:253] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2020-12-11 20:50:29.236389883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-14-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:31628288000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-1 Labels:[] ExperimentalBuild:false ServerVersion:19.03.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I1211 20:50:29.292669  180511 docker.go:147] overlay module found
	I1211 20:50:29.295295  180511 out.go:119] * Using the docker driver based on existing profile
	I1211 20:50:29.295329  180511 start.go:277] selected driver: docker
	I1211 20:50:29.295337  180511 start.go:695] validating driver "docker" against &{Name:functional-20201211203409-6575 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:functional-20201211203409-6575 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.176 Port:8441 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[ambassador:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true apps_running:true default_sa:true kubelet:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] MultiNodeRequested:false}
	I1211 20:50:29.295497  180511 start.go:706] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Fix: Doc:}
	I1211 20:50:29.298196  180511 out.go:119] 
	W1211 20:50:29.298349  180511 out.go:177] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 953MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 953MB
	I1211 20:50:29.302331  180511 out.go:119] 

** /stderr **
functional_test.go:512: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20201211203409-6575 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (0.74s)

TestFunctional/parallel/StatusCmd (1.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 status
functional_test.go:417: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)

TestFunctional/parallel/LogsCmd (3.26s)

=== RUN   TestFunctional/parallel/LogsCmd
=== PAUSE TestFunctional/parallel/LogsCmd

=== CONT  TestFunctional/parallel/LogsCmd
functional_test.go:672: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 logs

=== CONT  TestFunctional/parallel/LogsCmd
functional_test.go:672: (dbg) Done: out/minikube-linux-amd64 -p functional-20201211203409-6575 logs: (3.256504497s)
--- PASS: TestFunctional/parallel/LogsCmd (3.26s)

TestFunctional/parallel/MountCmd (8.67s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
fn_mount_cmd_test.go:72: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20201211203409-6575 /tmp/mounttest826018218:/mount-9p --alsologtostderr -v=1]
fn_mount_cmd_test.go:106: wrote "test-1607719845624038255" to /tmp/mounttest826018218/created-by-test
fn_mount_cmd_test.go:106: wrote "test-1607719845624038255" to /tmp/mounttest826018218/created-by-test-removed-by-pod
fn_mount_cmd_test.go:106: wrote "test-1607719845624038255" to /tmp/mounttest826018218/test-1607719845624038255
fn_mount_cmd_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh "findmnt -T /mount-9p | grep 9p"
fn_mount_cmd_test.go:114: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (351.923945ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
fn_mount_cmd_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh "findmnt -T /mount-9p | grep 9p"
fn_mount_cmd_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh -- ls -la /mount-9p

=== CONT  TestFunctional/parallel/MountCmd
fn_mount_cmd_test.go:132: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 11 20:50 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 11 20:50 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 11 20:50 test-1607719845624038255
fn_mount_cmd_test.go:136: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh cat /mount-9p/test-1607719845624038255
fn_mount_cmd_test.go:147: (dbg) Run:  kubectl --context functional-20201211203409-6575 replace --force -f testdata/busybox-mount-test.yaml
fn_mount_cmd_test.go:152: (dbg) TestFunctional/parallel/MountCmd: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:333: "busybox-mount" [797579a3-2735-47b7-a6f0-5baa676bf949] Pending

=== CONT  TestFunctional/parallel/MountCmd
helpers_test.go:333: "busybox-mount" [797579a3-2735-47b7-a6f0-5baa676bf949] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd
helpers_test.go:333: "busybox-mount" [797579a3-2735-47b7-a6f0-5baa676bf949] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
fn_mount_cmd_test.go:152: (dbg) TestFunctional/parallel/MountCmd: integration-test=busybox-mount healthy within 5.015788756s
fn_mount_cmd_test.go:168: (dbg) Run:  kubectl --context functional-20201211203409-6575 logs busybox-mount
fn_mount_cmd_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh stat /mount-9p/created-by-test
fn_mount_cmd_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh stat /mount-9p/created-by-pod
fn_mount_cmd_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh "sudo umount -f /mount-9p"
fn_mount_cmd_test.go:93: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20201211203409-6575 /tmp/mounttest826018218:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd (8.67s)
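
The 9p mount flow exercised above can be repeated manually; a sketch, assuming a profile named "functional" and an existing host directory /tmp/mounttest:

	minikube -p functional mount /tmp/mounttest:/mount-9p &   # keep the mount process running in the background
	minikube -p functional ssh "findmnt -T /mount-9p | grep 9p"
	minikube -p functional ssh -- ls -la /mount-9p
	minikube -p functional ssh "sudo umount -f /mount-9p"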

TestFunctional/parallel/ServiceCmd (18.81s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:789: (dbg) Run:  kubectl --context functional-20201211203409-6575 create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
functional_test.go:793: (dbg) Run:  kubectl --context functional-20201211203409-6575 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:798: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:333: "hello-node-7567d9fdc9-szst5" [34f2b25a-a595-4289-a5c8-f8298beefdec] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:333: "hello-node-7567d9fdc9-szst5" [34f2b25a-a595-4289-a5c8-f8298beefdec] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:798: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 16.006457058s
functional_test.go:802: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:802: (dbg) Done: out/minikube-linux-amd64 -p functional-20201211203409-6575 service list: (1.391787856s)
functional_test.go:815: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 service --namespace=default --https --url hello-node
2020/12/11 20:50:48 [DEBUG] GET http://127.0.0.1:43167/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:824: found endpoint: https://192.168.49.176:30423
functional_test.go:835: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:844: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 service hello-node --url
functional_test.go:850: found endpoint for hello-node: http://192.168.49.176:30423
functional_test.go:861: Attempting to fetch http://192.168.49.176:30423 ...
functional_test.go:880: http://192.168.49.176:30423: success! body:
CLIENT VALUES:
client_address=172.17.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://192.168.49.176:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept-encoding=gzip
host=192.168.49.176:30423
user-agent=Go-http-client/1.1
BODY:
-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmd (18.81s)
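
A condensed sketch of the service check above, assuming kubectl already points at the test cluster and a profile named "functional":

	kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
	kubectl expose deployment hello-node --type=NodePort --port=8080
	minikube -p functional service hello-node --url    # e.g. http://192.168.49.176:30423
	curl "$(minikube -p functional service hello-node --url)"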

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:895: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 addons list
functional_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (42.84s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
fn_pvc_test.go:43: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:333: "storage-provisioner" [e1d080d7-161d-4224-bd76-64d93a54a5a1] Running
fn_pvc_test.go:43: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.153674098s
fn_pvc_test.go:48: (dbg) Run:  kubectl --context functional-20201211203409-6575 get storageclass -o=json
fn_pvc_test.go:68: (dbg) Run:  kubectl --context functional-20201211203409-6575 apply -f testdata/storage-provisioner/pvc.yaml
fn_pvc_test.go:68: (dbg) Done: kubectl --context functional-20201211203409-6575 apply -f testdata/storage-provisioner/pvc.yaml: (5.74185453s)
fn_pvc_test.go:75: (dbg) Run:  kubectl --context functional-20201211203409-6575 get pvc myclaim -o=json
fn_pvc_test.go:124: (dbg) Run:  kubectl --context functional-20201211203409-6575 apply -f testdata/storage-provisioner/pod.yaml
fn_pvc_test.go:129: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:333: "sp-pod" [c743c2cd-0c48-4f21-a6a8-042853473535] Pending
helpers_test.go:333: "sp-pod" [c743c2cd-0c48-4f21-a6a8-042853473535] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:333: "sp-pod" [c743c2cd-0c48-4f21-a6a8-042853473535] Running
fn_pvc_test.go:129: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.01362938s
fn_pvc_test.go:99: (dbg) Run:  kubectl --context functional-20201211203409-6575 exec sp-pod -- touch /tmp/mount/foo
fn_pvc_test.go:105: (dbg) Run:  kubectl --context functional-20201211203409-6575 delete -f testdata/storage-provisioner/pod.yaml
fn_pvc_test.go:105: (dbg) Done: kubectl --context functional-20201211203409-6575 delete -f testdata/storage-provisioner/pod.yaml: (7.68955414s)
fn_pvc_test.go:124: (dbg) Run:  kubectl --context functional-20201211203409-6575 apply -f testdata/storage-provisioner/pod.yaml
fn_pvc_test.go:129: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:333: "sp-pod" [060a6c4d-192c-4e3e-831b-21a4583783ad] Pending
helpers_test.go:333: "sp-pod" [060a6c4d-192c-4e3e-831b-21a4583783ad] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:333: "sp-pod" [060a6c4d-192c-4e3e-831b-21a4583783ad] Running
fn_pvc_test.go:129: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.0086428s
fn_pvc_test.go:113: (dbg) Run:  kubectl --context functional-20201211203409-6575 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.84s)

TestFunctional/parallel/SSHCmd (0.88s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:928: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:945: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.88s)

TestFunctional/parallel/MySQL (34.85s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:963: (dbg) Run:  kubectl --context functional-20201211203409-6575 replace --force -f testdata/mysql.yaml
functional_test.go:968: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:333: "mysql-65c76b9ccb-djcgn" [20ab1ada-3979-42c4-8d96-aea242a8c498] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:333: "mysql-65c76b9ccb-djcgn" [20ab1ada-3979-42c4-8d96-aea242a8c498] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:968: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 28.016419082s
functional_test.go:975: (dbg) Run:  kubectl --context functional-20201211203409-6575 exec mysql-65c76b9ccb-djcgn -- mysql -ppassword -e "show databases;"
functional_test.go:975: (dbg) Non-zero exit: kubectl --context functional-20201211203409-6575 exec mysql-65c76b9ccb-djcgn -- mysql -ppassword -e "show databases;": exit status 1 (408.802646ms)

** stderr ** 
	Warning: Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:975: (dbg) Run:  kubectl --context functional-20201211203409-6575 exec mysql-65c76b9ccb-djcgn -- mysql -ppassword -e "show databases;"
functional_test.go:975: (dbg) Non-zero exit: kubectl --context functional-20201211203409-6575 exec mysql-65c76b9ccb-djcgn -- mysql -ppassword -e "show databases;": exit status 1 (336.537501ms)

** stderr ** 
	Warning: Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:975: (dbg) Run:  kubectl --context functional-20201211203409-6575 exec mysql-65c76b9ccb-djcgn -- mysql -ppassword -e "show databases;"
functional_test.go:975: (dbg) Non-zero exit: kubectl --context functional-20201211203409-6575 exec mysql-65c76b9ccb-djcgn -- mysql -ppassword -e "show databases;": exit status 1 (178.42621ms)

** stderr ** 
	Warning: Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:975: (dbg) Run:  kubectl --context functional-20201211203409-6575 exec mysql-65c76b9ccb-djcgn -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (34.85s)

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1059: Checking for existence of /etc/test/nested/copy/6575/hosts within VM
functional_test.go:1060: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh "sudo cat /etc/test/nested/copy/6575/hosts"
functional_test.go:1065: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (1.08s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1100: Checking for existence of /etc/ssl/certs/6575.pem within VM
functional_test.go:1101: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh "sudo cat /etc/ssl/certs/6575.pem"
functional_test.go:1100: Checking for existence of /usr/share/ca-certificates/6575.pem within VM
functional_test.go:1101: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh "sudo cat /usr/share/ca-certificates/6575.pem"
functional_test.go:1100: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1101: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 ssh "sudo cat /etc/ssl/certs/51391683.0"
--- PASS: TestFunctional/parallel/CertSync (1.08s)

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:152: (dbg) Run:  kubectl --context functional-20201211203409-6575 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestPause/serial/SecondStartNoReconfiguration (15.4s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:87: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20201211204619-6575 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:87: (dbg) Done: out/minikube-linux-amd64 start -p pause-20201211204619-6575 --alsologtostderr -v=1 --driver=docker : (15.376221981s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (15.40s)

TestPause/serial/Pause (0.64s)

=== RUN   TestPause/serial/Pause
pause_test.go:104: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20201211204619-6575 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.64s)

TestPause/serial/VerifyStatus (0.39s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:75: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20201211204619-6575 --output=json --layout=cluster
status_test.go:75: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20201211204619-6575 --output=json --layout=cluster: exit status 2 (391.672631ms)

-- stdout --
	{"Name":"pause-20201211204619-6575","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.15.1","TimeToStop":"Nonexistent","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20201211204619-6575","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)

TestPause/serial/Unpause (0.67s)

=== RUN   TestPause/serial/Unpause
pause_test.go:114: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20201211204619-6575 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

TestPause/serial/PauseAgain (1.01s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:104: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20201211204619-6575 --alsologtostderr -v=5
pause_test.go:104: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20201211204619-6575 --alsologtostderr -v=5: (1.014670336s)
--- PASS: TestPause/serial/PauseAgain (1.01s)

TestPause/serial/DeletePaused (3.75s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20201211204619-6575 --alsologtostderr -v=5
pause_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20201211204619-6575 --alsologtostderr -v=5: (3.745121243s)
--- PASS: TestPause/serial/DeletePaused (3.75s)

TestPause/serial/VerifyDeletedResources (1.4s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:134: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:134: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.281594263s)
pause_test.go:160: (dbg) Run:  docker ps -a
pause_test.go:165: (dbg) Run:  docker volume inspect pause-20201211204619-6575
pause_test.go:165: (dbg) Non-zero exit: docker volume inspect pause-20201211204619-6575: exit status 1 (54.640829ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20201211204619-6575

** /stderr **
--- PASS: TestPause/serial/VerifyDeletedResources (1.40s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20201211203409-6575 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
fn_tunnel_cmd_test.go:125: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20201211203409-6575 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
fn_tunnel_cmd_test.go:163: (dbg) Run:  kubectl --context functional-20201211203409-6575 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
fn_tunnel_cmd_test.go:228: tunnel at http://10.102.116.232 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
fn_tunnel_cmd_test.go:363: (dbg) stopping [out/minikube-linux-amd64 -p functional-20201211203409-6575 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
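
The four tunnel subtests above map to a simple manual flow; a sketch, assuming a profile named "functional" and a LoadBalancer service named nginx-svc already deployed (the test creates it in an earlier phase not shown here):

	minikube -p functional tunnel &    # keep the tunnel running in the background
	kubectl get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl http://<ingress-ip>/          # placeholder: substitute the IP printed by the previous command
	kill %1                            # stop the backgrounded tunnel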

TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:688: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:692: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:713: (dbg) Run:  out/minikube-linux-amd64 profile list
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:735: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestNetworkPlugins/group/auto/Start (60.23s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20201211205049-6575 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --driver=docker 

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p auto-20201211205049-6575 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --driver=docker : (1m0.231783198s)
--- PASS: TestNetworkPlugins/group/auto/Start (60.23s)

TestNetworkPlugins/group/false/Start (59.63s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p false-20201211205049-6575 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=false --driver=docker 

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p false-20201211205049-6575 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=false --driver=docker : (59.628040341s)
--- PASS: TestNetworkPlugins/group/false/Start (59.63s)

TestNetworkPlugins/group/cilium/Start (88.41s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20201211205054-6575 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=cilium --driver=docker 

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20201211205054-6575 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=cilium --driver=docker : (1m28.411236052s)
--- PASS: TestNetworkPlugins/group/cilium/Start (88.41s)

TestNetworkPlugins/group/false/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-20201211205049-6575 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.40s)

TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20201211205049-6575 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

TestNetworkPlugins/group/false/NetCatPod (9.64s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context false-20201211205049-6575 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-njvdw" [de298c20-e26d-4261-8997-dd731b84e78e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-njvdw" [de298c20-e26d-4261-8997-dd731b84e78e] Running

=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.006920818s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.64s)

TestNetworkPlugins/group/auto/NetCatPod (10.64s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context auto-20201211205049-6575 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-8nm2m" [2f6ea982-4bf1-40e3-83c8-3a009a8c0cd0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-8nm2m" [2f6ea982-4bf1-40e3-83c8-3a009a8c0cd0] Running

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.012138831s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.64s)

TestNetworkPlugins/group/false/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:156: (dbg) Run:  kubectl --context false-20201211205049-6575 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.23s)

TestNetworkPlugins/group/false/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:175: (dbg) Run:  kubectl --context false-20201211205049-6575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.20s)

TestNetworkPlugins/group/false/HairPin (5.25s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:188: (dbg) Run:  kubectl --context false-20201211205049-6575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/false/HairPin
net_test.go:188: (dbg) Non-zero exit: kubectl --context false-20201211205049-6575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.245046164s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.25s)

TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:156: (dbg) Run:  kubectl --context auto-20201211205049-6575 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:175: (dbg) Run:  kubectl --context auto-20201211205049-6575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

TestNetworkPlugins/group/auto/HairPin (5.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:188: (dbg) Run:  kubectl --context auto-20201211205049-6575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/auto/HairPin
net_test.go:188: (dbg) Non-zero exit: kubectl --context auto-20201211205049-6575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.19915464s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.20s)

TestNetworkPlugins/group/calico/Start (97.26s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20201211205208-6575 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=calico --driver=docker 

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p calico-20201211205208-6575 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=calico --driver=docker : (1m37.261749438s)
--- PASS: TestNetworkPlugins/group/calico/Start (97.26s)

TestNetworkPlugins/group/custom-weave/Start (66.26s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20201211205209-6575 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=testdata/weavenet.yaml --driver=docker 

=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20201211205209-6575 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=testdata/weavenet.yaml --driver=docker : (1m6.264583159s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (66.26s)

TestNetworkPlugins/group/cilium/ControllerPod (8.06s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:88: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:333: "cilium-bg8f4" [f8f27023-534f-4f43-80e8-d1fb471d800b] Running / Ready:ContainersNotReady (containers with unready status: [cilium-agent]) / ContainersReady:ContainersNotReady (containers with unready status: [cilium-agent])
net_test.go:88: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 8.054188202s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (8.06s)

                                                
                                    
TestNetworkPlugins/group/cilium/KubeletFlags (1.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20201211205054-6575 "pgrep -a kubelet"
net_test.go:102: (dbg) Done: out/minikube-linux-amd64 ssh -p cilium-20201211205054-6575 "pgrep -a kubelet": (1.326895443s)
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (1.33s)

                                                
                                    
TestNetworkPlugins/group/cilium/NetCatPod (12.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context cilium-20201211205054-6575 replace --force -f testdata/netcat-deployment.yaml
net_test.go:139: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-mr58n" [04812489-3177-4e5a-8961-3ffe9461d776] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:333: "netcat-66fbc655d5-mr58n" [04812489-3177-4e5a-8961-3ffe9461d776] Running
net_test.go:139: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 12.008664435s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (12.53s)
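
Every NetCatPod step follows the same shape: force-replace the deployment from testdata/netcat-deployment.yaml, then poll until a pod labelled app=netcat is Running and Ready. The rough equivalent below uses kubectl wait in place of the suite's own polling helpers (a sketch under that substitution, with the cilium context name taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        ctx := "cilium-20201211205054-6575"

        // Recreate the deployment exactly as the test does.
        replace := exec.Command("kubectl", "--context", ctx,
            "replace", "--force", "-f", "testdata/netcat-deployment.yaml")
        if out, err := replace.CombinedOutput(); err != nil {
            panic(fmt.Sprintf("replace failed: %v\n%s", err, out))
        }

        // Block until the labelled pod reports Ready (the log's 15m budget).
        wait := exec.Command("kubectl", "--context", ctx,
            "wait", "--for=condition=Ready", "pod", "-l", "app=netcat",
            "--timeout=15m")
        if out, err := wait.CombinedOutput(); err != nil {
            panic(fmt.Sprintf("pod never became Ready: %v\n%s", err, out))
        }
        fmt.Println("netcat pod is Ready")
    }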

                                                
                                    
TestNetworkPlugins/group/cilium/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:156: (dbg) Run:  kubectl --context cilium-20201211205054-6575 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/cilium/Localhost (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:175: (dbg) Run:  kubectl --context cilium-20201211205054-6575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.30s)

                                                
                                    
TestNetworkPlugins/group/cilium/HairPin (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:188: (dbg) Run:  kubectl --context cilium-20201211205054-6575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (59.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20201211205249-6575 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --enable-default-cni=true --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20201211205249-6575 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --enable-default-cni=true --driver=docker : (59.750167452s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (59.75s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (66.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20201211205303-6575 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=kindnet --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20201211205303-6575 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=kindnet --driver=docker : (1m6.652886098s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (66.65s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20201211205209-6575 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/NetCatPod (18.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context custom-weave-20201211205209-6575 replace --force -f testdata/netcat-deployment.yaml
net_test.go:125: (dbg) Done: kubectl --context custom-weave-20201211205209-6575 replace --force -f testdata/netcat-deployment.yaml: (3.777513315s)
net_test.go:139: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-bfcgc" [7fc0aa72-5ec4-4a4a-9c0f-a5b706911050] Pending
helpers_test.go:333: "netcat-66fbc655d5-bfcgc" [7fc0aa72-5ec4-4a4a-9c0f-a5b706911050] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:333: "netcat-66fbc655d5-bfcgc" [7fc0aa72-5ec4-4a4a-9c0f-a5b706911050] Running
net_test.go:139: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 9.014845669s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (18.97s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (57.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20201211205338-6575 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=bridge --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20201211205338-6575 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=bridge --driver=docker : (57.1850523s)
--- PASS: TestNetworkPlugins/group/bridge/Start (57.19s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:88: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:333: "calico-node-vr7cf" [fd5db159-72d4-4e9d-9ca6-fe6902ae96b9] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:88: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.021476632s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20201211205249-6575 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context enable-default-cni-20201211205249-6575 replace --force -f testdata/netcat-deployment.yaml
net_test.go:139: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-dbnrf" [357b7d51-7cc6-4ba3-bff1-fdd16020100b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-dbnrf" [357b7d51-7cc6-4ba3-bff1-fdd16020100b] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.007207814s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.37s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20201211205208-6575 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context calico-20201211205208-6575 replace --force -f testdata/netcat-deployment.yaml
net_test.go:139: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-h8gr9" [f2dc8a0c-d999-42bc-95c0-bb04ba5f4278] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-h8gr9" [f2dc8a0c-d999-42bc-95c0-bb04ba5f4278] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.011281529s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.74s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:156: (dbg) Run:  kubectl --context enable-default-cni-20201211205249-6575 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-20201211205249-6575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-20201211205249-6575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:156: (dbg) Run:  kubectl --context calico-20201211205208-6575 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.51s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:175: (dbg) Run:  kubectl --context calico-20201211205208-6575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:188: (dbg) Run:  kubectl --context calico-20201211205208-6575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (266.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-20201211205405-6575 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --network-plugin=kubenet --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-20201211205405-6575 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --network-plugin=kubenet --driver=docker : (4m26.115283072s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (266.12s)
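
Taken together, the Start steps in this group differ only in the flag that selects the network plugin: --cni=calico, --cni=kindnet, --cni=bridge, a custom manifest via --cni=testdata/weavenet.yaml, --enable-default-cni=true, and --network-plugin=kubenet (the kubenet start is the slowest of the group at 4m26s). A small sketch that just reconstructs those command lines from the flags shown in the log; profile naming is simplified and nothing beyond the logged flags is assumed:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Plugin -> the selector flag used in this run (copied from the log).
        plugins := map[string]string{
            "calico":             "--cni=calico",
            "custom-weave":       "--cni=testdata/weavenet.yaml",
            "kindnet":            "--cni=kindnet",
            "bridge":             "--cni=bridge",
            "enable-default-cni": "--enable-default-cni=true",
            "kubenet":            "--network-plugin=kubenet",
        }
        for name, flag := range plugins {
            cmd := []string{"out/minikube-linux-amd64", "start", "-p", name,
                "--memory=1800", "--alsologtostderr", "--wait=true",
                "--wait-timeout=25m", flag, "--driver=docker"}
            fmt.Println(strings.Join(cmd, " "))
        }
    }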

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (107.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20201211205407-6575 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --container-runtime=docker --driver=docker  --kubernetes-version=v1.13.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20201211205407-6575 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --container-runtime=docker --driver=docker  --kubernetes-version=v1.13.0: (1m47.92712011s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (107.93s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:88: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:333: "kindnet-s2xn4" [826e79dc-0b29-4805-b137-62491bf8b962] Running
net_test.go:88: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.217501855s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20201211205303-6575 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.57s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (16.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context kindnet-20201211205303-6575 replace --force -f testdata/netcat-deployment.yaml
net_test.go:139: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-bdp7m" [1ddb1a98-bf3b-4410-8351-91fa0fee59c9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:333: "netcat-66fbc655d5-bdp7m" [1ddb1a98-bf3b-4410-8351-91fa0fee59c9] Running
net_test.go:139: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 16.00758745s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (16.56s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:156: (dbg) Run:  kubectl --context kindnet-20201211205303-6575 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:175: (dbg) Run:  kubectl --context kindnet-20201211205303-6575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20201211205303-6575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20201211205338-6575 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context bridge-20201211205338-6575 replace --force -f testdata/netcat-deployment.yaml
net_test.go:139: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-mxw2s" [413a00d2-ded4-4048-8bba-e15e38531aa6] Pending

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-mxw2s" [413a00d2-ded4-4048-8bba-e15e38531aa6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:333: "netcat-66fbc655d5-mxw2s" [413a00d2-ded4-4048-8bba-e15e38531aa6] Running
net_test.go:139: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.504464824s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.83s)

                                                
                                    
TestStartStop/group/crio/serial/FirstStart (143.75s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Run:  out/minikube-linux-amd64 start -p crio-20201211205436-6575 --memory=2200 --alsologtostderr --wait=true --container-runtime=crio --disable-driver-mounts --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --driver=docker  --kubernetes-version=v1.15.7

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Done: out/minikube-linux-amd64 start -p crio-20201211205436-6575 --memory=2200 --alsologtostderr --wait=true --container-runtime=crio --disable-driver-mounts --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --driver=docker  --kubernetes-version=v1.15.7: (2m23.750928383s)
--- PASS: TestStartStop/group/crio/serial/FirstStart (143.75s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:156: (dbg) Run:  kubectl --context bridge-20201211205338-6575 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:175: (dbg) Run:  kubectl --context bridge-20201211205338-6575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:188: (dbg) Run:  kubectl --context bridge-20201211205338-6575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (52.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20201211205452-6575 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.20.0
start_stop_delete_test.go:154: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20201211205452-6575 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.20.0: (52.301925077s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (52.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context embed-certs-20201211205452-6575 create -f testdata/busybox.yaml
start_stop_delete_test.go:163: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:333: "busybox" [809bec3d-32a6-4fcf-90db-24f82a72cad4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:333: "busybox" [809bec3d-32a6-4fcf-90db-24f82a72cad4] Running
start_stop_delete_test.go:163: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.015698549s
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context embed-certs-20201211205452-6575 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.48s)
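
DeployApp is the same three moves in every group: create the busybox pod from testdata/busybox.yaml, wait until the pod labelled integration-test=busybox is healthy, then read the open-file limit inside it with "ulimit -n". A compact sketch of that sequence, using kubectl wait as a stand-in for the suite's polling and the embed-certs context from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // run executes a command and panics with its combined output on failure.
    func run(args ...string) string {
        out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%v failed: %v\n%s", args, err, out))
        }
        return string(out)
    }

    func main() {
        ctx := "embed-certs-20201211205452-6575"

        run("kubectl", "--context", ctx, "create", "-f", "testdata/busybox.yaml")
        run("kubectl", "--context", ctx, "wait", "--for=condition=Ready",
            "pod", "-l", "integration-test=busybox", "--timeout=8m")

        // Final step from the log: check the file-descriptor limit in the pod.
        limit := run("kubectl", "--context", ctx, "exec", "busybox", "--",
            "/bin/sh", "-c", "ulimit -n")
        fmt.Println("ulimit -n inside busybox:", strings.TrimSpace(limit))
    }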

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:169: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20201211205452-6575 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:169: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20201211205452-6575 --alsologtostderr -v=3: (11.235473466s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context old-k8s-version-20201211205407-6575 create -f testdata/busybox.yaml
start_stop_delete_test.go:163: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:333: "busybox" [43670582-3bf3-11eb-9f64-024224fa3dc7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:333: "busybox" [43670582-3bf3-11eb-9f64-024224fa3dc7] Running
start_stop_delete_test.go:163: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.015313408s
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context old-k8s-version-20201211205407-6575 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.45s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:169: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20201211205407-6575 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:169: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20201211205407-6575 --alsologtostderr -v=3: (11.051580574s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:179: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20201211205452-6575 -n embed-certs-20201211205452-6575
start_stop_delete_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20201211205452-6575 -n embed-certs-20201211205452-6575: exit status 7 (149.889325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:179: status error: exit status 7 (may be ok)
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20201211205452-6575
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.36s)
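
EnableAddonAfterStop leans on minikube status exit codes: with the cluster stopped, the Host query exits with status 7 and prints "Stopped", which the test records as "may be ok" before enabling the dashboard addon anyway. A sketch of tolerating exactly that exit code from Go; no exit-code meanings beyond what the log shows are assumed:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        profile := "embed-certs-20201211205452-6575"

        status := exec.Command("out/minikube-linux-amd64", "status",
            "--format={{.Host}}", "-p", profile, "-n", profile)
        out, err := status.CombinedOutput()

        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
            // Exit 7 plus "Stopped" is expected against a stopped cluster.
            fmt.Printf("host status %q (exit 7, may be ok)\n", string(out))
        } else if err != nil {
            panic(err)
        }

        enable := exec.Command("out/minikube-linux-amd64", "addons", "enable",
            "dashboard", "-p", profile)
        if out, err := enable.CombinedOutput(); err != nil {
            panic(fmt.Sprintf("enable dashboard failed: %v\n%s", err, out))
        }
    }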

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (23.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20201211205452-6575 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.20.0

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20201211205452-6575 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.20.0: (22.833086344s)
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20201211205452-6575 -n embed-certs-20201211205452-6575
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (23.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:179: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20201211205407-6575 -n old-k8s-version-20201211205407-6575
start_stop_delete_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20201211205407-6575 -n old-k8s-version-20201211205407-6575: exit status 7 (133.864908ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:179: status error: exit status 7 (may be ok)
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20201211205407-6575
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (28.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20201211205407-6575 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --container-runtime=docker --driver=docker  --kubernetes-version=v1.13.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20201211205407-6575 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --container-runtime=docker --driver=docker  --kubernetes-version=v1.13.0: (27.997669122s)
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20201211205407-6575 -n old-k8s-version-20201211205407-6575
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (28.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (19.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-584f46694c-h54pc" [cfb8cca2-e8da-449b-94f4-907990fb79f9] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:333: "kubernetes-dashboard-584f46694c-h54pc" [cfb8cca2-e8da-449b-94f4-907990fb79f9] Running

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.017153869s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (19.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (19.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:333: "kubernetes-dashboard-66766c77dc-vm24j" [66fdcf8e-3bf3-11eb-b0ff-0242c0a83bb0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:333: "kubernetes-dashboard-66766c77dc-vm24j" [66fdcf8e-3bf3-11eb-b0ff-0242c0a83bb0] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.013519969s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (19.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:224: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-584f46694c-h54pc" [cfb8cca2-e8da-449b-94f4-907990fb79f9] Running
start_stop_delete_test.go:224: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007222526s
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:232: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20201211205452-6575 "sudo crictl images -o json"
start_stop_delete_test.go:232: Found non-minikube image: busybox:1.28.4-glibc
start_stop_delete_test.go:232: Found non-minikube image: minikube-local-cache-test:functional-20201211203409-6575
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)
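
VerifyKubernetesImages lists images over SSH with "sudo crictl images -o json" and reports anything that is not a stock minikube image (here the busybox test image and the local cache-test image). A sketch of pulling the repo tags out of that JSON; the images/repoTags field names follow crictl's JSON output and should be treated as an assumption if your crictl version differs:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Minimal view of `crictl images -o json`; only the tags matter here.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        profile := "embed-certs-20201211205452-6575"
        out, err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile,
            "sudo crictl images -o json").Output()
        if err != nil {
            panic(err)
        }

        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                fmt.Println(tag) // the test then filters out known minikube images
            }
        }
    }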

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20201211205452-6575 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20201211205452-6575 -n embed-certs-20201211205452-6575
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20201211205452-6575 -n embed-certs-20201211205452-6575: exit status 2 (378.376322ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20201211205452-6575 -n embed-certs-20201211205452-6575
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20201211205452-6575 -n embed-certs-20201211205452-6575: exit status 2 (415.193184ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20201211205452-6575 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20201211205452-6575 -n embed-certs-20201211205452-6575
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20201211205452-6575 -n embed-certs-20201211205452-6575
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.41s)
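
The Pause step above is a five-command round trip: pause the profile, confirm the apiserver reports Paused and the kubelet reports Stopped (each status call exits with status 2, which the test treats as "may be ok"), then unpause and query both again. A condensed sketch of that sequence, deliberately ignoring the non-zero status exits just as the log does:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // status runs `minikube status` with the given Go template and returns the
    // trimmed output, ignoring the non-zero exit that paused or stopped
    // components produce.
    func status(profile, format string) string {
        out, _ := exec.Command("out/minikube-linux-amd64", "status",
            "--format="+format, "-p", profile, "-n", profile).CombinedOutput()
        return strings.TrimSpace(string(out))
    }

    func main() {
        profile := "embed-certs-20201211205452-6575"

        _ = exec.Command("out/minikube-linux-amd64", "pause", "-p", profile).Run()
        fmt.Println("apiserver:", status(profile, "{{.APIServer}}")) // expect "Paused"
        fmt.Println("kubelet:  ", status(profile, "{{.Kubelet}}"))   // expect "Stopped"

        _ = exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile).Run()
        fmt.Println("apiserver:", status(profile, "{{.APIServer}}"))
        fmt.Println("kubelet:  ", status(profile, "{{.Kubelet}}"))
    }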

                                                
                                    
TestStartStop/group/crio/serial/DeployApp (12.51s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/DeployApp
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context crio-20201211205436-6575 create -f testdata/busybox.yaml

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/DeployApp
start_stop_delete_test.go:163: (dbg) TestStartStop/group/crio/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:333: "busybox" [332ae3ea-09a5-451e-a097-f913ddeaead7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/DeployApp
helpers_test.go:333: "busybox" [332ae3ea-09a5-451e-a097-f913ddeaead7] Running

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/DeployApp
start_stop_delete_test.go:163: (dbg) TestStartStop/group/crio/serial/DeployApp: integration-test=busybox healthy within 12.019967764s
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context crio-20201211205436-6575 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/crio/serial/DeployApp (12.51s)

                                                
                                    
TestStartStop/group/containerd/serial/FirstStart (83.28s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Run:  out/minikube-linux-amd64 start -p containerd-20201211205701-6575 --memory=2200 --alsologtostderr --wait=true --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.20.0

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Done: out/minikube-linux-amd64 start -p containerd-20201211205701-6575 --memory=2200 --alsologtostderr --wait=true --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.20.0: (1m23.282440416s)
--- PASS: TestStartStop/group/containerd/serial/FirstStart (83.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:224: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-66766c77dc-vm24j" [66fdcf8e-3bf3-11eb-b0ff-0242c0a83bb0] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:224: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005917072s
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:232: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20201211205407-6575 "sudo crictl images -o json"
start_stop_delete_test.go:232: Found non-minikube image: busybox:1.28.4-glibc
start_stop_delete_test.go:232: Found non-minikube image: minikube-local-cache-test:functional-20201211203409-6575
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20201211205407-6575 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20201211205407-6575 -n old-k8s-version-20201211205407-6575
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20201211205407-6575 -n old-k8s-version-20201211205407-6575: exit status 2 (391.397211ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20201211205407-6575 -n old-k8s-version-20201211205407-6575
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20201211205407-6575 -n old-k8s-version-20201211205407-6575: exit status 2 (398.350738ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20201211205407-6575 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20201211205407-6575 -n old-k8s-version-20201211205407-6575
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20201211205407-6575 -n old-k8s-version-20201211205407-6575
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.32s)

                                                
                                    
TestStartStop/group/crio/serial/Stop (24.7s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/Stop
start_stop_delete_test.go:169: (dbg) Run:  out/minikube-linux-amd64 stop -p crio-20201211205436-6575 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/Stop
start_stop_delete_test.go:169: (dbg) Done: out/minikube-linux-amd64 stop -p crio-20201211205436-6575 --alsologtostderr -v=3: (24.703724247s)
--- PASS: TestStartStop/group/crio/serial/Stop (24.70s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (53.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20201211205716-6575 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.20.0

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20201211205716-6575 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.20.0: (53.045136262s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (53.05s)

                                                
                                    
TestStartStop/group/crio/serial/EnableAddonAfterStop (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/EnableAddonAfterStop
start_stop_delete_test.go:179: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p crio-20201211205436-6575 -n crio-20201211205436-6575
start_stop_delete_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p crio-20201211205436-6575 -n crio-20201211205436-6575: exit status 7 (160.11695ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:179: status error: exit status 7 (may be ok)
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p crio-20201211205436-6575
--- PASS: TestStartStop/group/crio/serial/EnableAddonAfterStop (0.33s)

                                                
                                    
TestStartStop/group/crio/serial/SecondStart (46.88s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Run:  out/minikube-linux-amd64 start -p crio-20201211205436-6575 --memory=2200 --alsologtostderr --wait=true --container-runtime=crio --disable-driver-mounts --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --driver=docker  --kubernetes-version=v1.15.7

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Done: out/minikube-linux-amd64 start -p crio-20201211205436-6575 --memory=2200 --alsologtostderr --wait=true --container-runtime=crio --disable-driver-mounts --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --driver=docker  --kubernetes-version=v1.15.7: (46.469899666s)
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p crio-20201211205436-6575 -n crio-20201211205436-6575
--- PASS: TestStartStop/group/crio/serial/SecondStart (46.88s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:169: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20201211205716-6575 --alsologtostderr -v=3
start_stop_delete_test.go:169: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20201211205716-6575 --alsologtostderr -v=3: (11.118675751s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:179: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20201211205716-6575 -n newest-cni-20201211205716-6575
start_stop_delete_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20201211205716-6575 -n newest-cni-20201211205716-6575: exit status 7 (111.126384ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:179: status error: exit status 7 (may be ok)
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20201211205716-6575
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (34.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20201211205716-6575 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.20.0

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20201211205716-6575 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.20.0: (33.776336619s)
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20201211205716-6575 -n newest-cni-20201211205716-6575
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.27s)

                                                
                                    
TestStartStop/group/containerd/serial/DeployApp (9.71s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/DeployApp
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context containerd-20201211205701-6575 create -f testdata/busybox.yaml

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/DeployApp
start_stop_delete_test.go:163: (dbg) TestStartStop/group/containerd/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:333: "busybox" [e0547182-afac-4df0-b3db-34e4ae3db5f7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:333: "busybox" [e0547182-afac-4df0-b3db-34e4ae3db5f7] Running

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/DeployApp
start_stop_delete_test.go:163: (dbg) TestStartStop/group/containerd/serial/DeployApp: integration-test=busybox healthy within 9.021041906s
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context containerd-20201211205701-6575 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/containerd/serial/DeployApp (9.71s)
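
The same deploy-and-probe flow can be sketched directly with kubectl: create the busybox pod from testdata/busybox.yaml, wait for it to become Ready, then read its open-file limit with `ulimit -n`. This sketch uses `kubectl wait` rather than the harness's own pod watcher; the context name and manifest path are the ones shown in this run.

-- example --
package main

import (
	"log"
	"os/exec"
)

// run executes a command and aborts with its output on failure.
func run(args ...string) []byte {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	ctx := "--context=containerd-20201211205701-6575"

	run("kubectl", ctx, "create", "-f", "testdata/busybox.yaml")
	run("kubectl", ctx, "wait", "--for=condition=Ready", "pod/busybox", "--timeout=8m")

	out := run("kubectl", ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	log.Printf("ulimit -n inside busybox: %s", out)
}
-- /example --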

                                                
                                    
TestStartStop/group/crio/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: (dbg) TestStartStop/group/crio/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-5ddb79bb9f-pw7ph" [cbd2451d-6393-4c60-b40d-4a7356b4b3dd] Running

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: (dbg) TestStartStop/group/crio/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015140489s
--- PASS: TestStartStop/group/crio/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/crio/serial/AddonExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/AddonExistsAfterStop
start_stop_delete_test.go:224: (dbg) TestStartStop/group/crio/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-5ddb79bb9f-pw7ph" [cbd2451d-6393-4c60-b40d-4a7356b4b3dd] Running

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/AddonExistsAfterStop
start_stop_delete_test.go:224: (dbg) TestStartStop/group/crio/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006661155s
--- PASS: TestStartStop/group/crio/serial/AddonExistsAfterStop (5.01s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-20201211205405-6575 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (9.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context kubenet-20201211205405-6575 replace --force -f testdata/netcat-deployment.yaml
net_test.go:139: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-b492r" [0ee9c03c-de40-4bd9-9703-ea5c5fe60e37] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-b492r" [0ee9c03c-de40-4bd9-9703-ea5c5fe60e37] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.006670838s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.36s)

                                                
                                    
TestStartStop/group/containerd/serial/Stop (21.29s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/Stop
start_stop_delete_test.go:169: (dbg) Run:  out/minikube-linux-amd64 stop -p containerd-20201211205701-6575 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/Stop
start_stop_delete_test.go:169: (dbg) Done: out/minikube-linux-amd64 stop -p containerd-20201211205701-6575 --alsologtostderr -v=3: (21.290417225s)
--- PASS: TestStartStop/group/containerd/serial/Stop (21.29s)

                                                
                                    
TestStartStop/group/crio/serial/VerifyKubernetesImages (0.46s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/VerifyKubernetesImages
start_stop_delete_test.go:232: (dbg) Run:  out/minikube-linux-amd64 ssh -p crio-20201211205436-6575 "sudo crictl images -o json"
start_stop_delete_test.go:232: Found non-minikube image: kindest/kindnetd:0.5.4
start_stop_delete_test.go:232: Found non-minikube image: library/busybox:1.28.4-glibc
start_stop_delete_test.go:232: Found non-minikube image: minikube-local-cache-test:functional-20201211203409-6575
--- PASS: TestStartStop/group/crio/serial/VerifyKubernetesImages (0.46s)
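
The image audit above can be approximated by decoding crictl's JSON output fetched over `minikube ssh` and flagging any repo tag that does not look like a core Kubernetes or minikube image. The sketch below uses an illustrative allow-list, not the exact list the test applies; the profile name is from this run.

-- example --
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// crictlImages mirrors the relevant part of `crictl images -o json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	profile := "crio-20201211205436-6575"

	out, err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile,
		"sudo crictl images -o json").CombinedOutput()
	if err != nil {
		log.Fatalf("crictl images failed: %v\n%s", err, out)
	}

	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		log.Fatalf("decoding crictl output: %v", err)
	}

	// Anything outside this rough allow-list gets reported, mirroring the
	// "Found non-minikube image" lines above.
	allowed := []string{"k8s.gcr.io", "kubernetesui", "k8s-minikube"}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			system := false
			for _, a := range allowed {
				if strings.Contains(tag, a) {
					system = true
					break
				}
			}
			if !system {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}
-- /example --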

                                                
                                    
TestStartStop/group/crio/serial/Pause (3.69s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 pause -p crio-20201211205436-6575 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p crio-20201211205436-6575 -n crio-20201211205436-6575
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p crio-20201211205436-6575 -n crio-20201211205436-6575: exit status 2 (414.413542ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p crio-20201211205436-6575 -n crio-20201211205436-6575
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p crio-20201211205436-6575 -n crio-20201211205436-6575: exit status 2 (415.610846ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 unpause -p crio-20201211205436-6575 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p crio-20201211205436-6575 -n crio-20201211205436-6575
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p crio-20201211205436-6575 -n crio-20201211205436-6575
--- PASS: TestStartStop/group/crio/serial/Pause (3.69s)
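
The pause round-trip above boils down to: pause the profile, confirm the apiserver reports Paused and the kubelet reports Stopped (via status commands that are expected to exit non-zero), then unpause and check again. A minimal sketch, with the profile name from this run:

-- example --
package main

import (
	"log"
	"os/exec"
	"strings"
)

// status prints one component's state; a non-zero exit is expected while the
// component is paused or stopped, so only the printed value is returned.
func status(profile, field string) string {
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile).CombinedOutput()
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "crio-20201211205436-6575"

	if out, err := exec.Command("out/minikube-linux-amd64", "pause", "-p", profile,
		"--alsologtostderr", "-v=1").CombinedOutput(); err != nil {
		log.Fatalf("pause failed: %v\n%s", err, out)
	}
	log.Printf("paused: APIServer=%s Kubelet=%s", status(profile, "APIServer"), status(profile, "Kubelet"))

	if out, err := exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile,
		"--alsologtostderr", "-v=1").CombinedOutput(); err != nil {
		log.Fatalf("unpause failed: %v\n%s", err, out)
	}
	log.Printf("unpaused: APIServer=%s Kubelet=%s", status(profile, "APIServer"), status(profile, "Kubelet"))
}
-- /example --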

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:156: (dbg) Run:  kubectl --context kubenet-20201211205405-6575 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.31s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:175: (dbg) Run:  kubectl --context kubenet-20201211205405-6575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:188: (dbg) Run:  kubectl --context kubenet-20201211205405-6575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.22s)
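
The three kubenet probes above (DNS, Localhost, HairPin) are all `kubectl exec` calls against the netcat deployment: an in-cluster `nslookup`, a netcat connect to localhost:8080, and a netcat connect back to the pod's own `netcat` service to exercise hairpin traffic. A minimal sketch with the context name from this run:

-- example --
package main

import (
	"log"
	"os/exec"
)

// probe runs a shell command inside the netcat deployment and fails loudly.
func probe(ctx, name, shellCmd string) {
	out, err := exec.Command("kubectl", "--context", ctx, "exec",
		"deployment/netcat", "--", "/bin/sh", "-c", shellCmd).CombinedOutput()
	if err != nil {
		log.Fatalf("%s probe failed: %v\n%s", name, err, out)
	}
	log.Printf("%s probe ok:\n%s", name, out)
}

func main() {
	ctx := "kubenet-20201211205405-6575"
	probe(ctx, "DNS", "nslookup kubernetes.default")
	probe(ctx, "Localhost", "nc -w 5 -i 5 -z localhost 8080")
	probe(ctx, "HairPin", "nc -w 5 -i 5 -z netcat 8080")
}
-- /example --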

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:212: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:223: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:232: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20201211205716-6575 "sudo crictl images -o json"
start_stop_delete_test.go:232: Found non-minikube image: minikube-local-cache-test:functional-20201211203409-6575
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20201211205716-6575 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:238: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-20201211205716-6575 --alsologtostderr -v=1: (1.167713466s)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20201211205716-6575 -n newest-cni-20201211205716-6575
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20201211205716-6575 -n newest-cni-20201211205716-6575: exit status 2 (358.86398ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20201211205716-6575 -n newest-cni-20201211205716-6575
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20201211205716-6575 -n newest-cni-20201211205716-6575: exit status 2 (345.275497ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20201211205716-6575 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20201211205716-6575 -n newest-cni-20201211205716-6575
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20201211205716-6575 -n newest-cni-20201211205716-6575
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.61s)

                                                
                                    
TestStartStop/group/containerd/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/EnableAddonAfterStop
start_stop_delete_test.go:179: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p containerd-20201211205701-6575 -n containerd-20201211205701-6575
start_stop_delete_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p containerd-20201211205701-6575 -n containerd-20201211205701-6575: exit status 7 (117.977093ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:179: status error: exit status 7 (may be ok)
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p containerd-20201211205701-6575
--- PASS: TestStartStop/group/containerd/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/containerd/serial/SecondStart (21.52s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Run:  out/minikube-linux-amd64 start -p containerd-20201211205701-6575 --memory=2200 --alsologtostderr --wait=true --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.20.0

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Done: out/minikube-linux-amd64 start -p containerd-20201211205701-6575 --memory=2200 --alsologtostderr --wait=true --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.20.0: (21.143952998s)
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p containerd-20201211205701-6575 -n containerd-20201211205701-6575
--- PASS: TestStartStop/group/containerd/serial/SecondStart (21.52s)

                                                
                                    
TestStartStop/group/containerd/serial/UserAppExistsAfterStop (19.02s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: (dbg) TestStartStop/group/containerd/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-584f46694c-766q2" [5d6c06ee-236f-4c38-bfc2-caab358db2d2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:333: "kubernetes-dashboard-584f46694c-766q2" [5d6c06ee-236f-4c38-bfc2-caab358db2d2] Running
start_stop_delete_test.go:213: (dbg) TestStartStop/group/containerd/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.013810414s
--- PASS: TestStartStop/group/containerd/serial/UserAppExistsAfterStop (19.02s)

                                                
                                    
TestStartStop/group/containerd/serial/AddonExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/AddonExistsAfterStop
start_stop_delete_test.go:224: (dbg) TestStartStop/group/containerd/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-584f46694c-766q2" [5d6c06ee-236f-4c38-bfc2-caab358db2d2] Running
start_stop_delete_test.go:224: (dbg) TestStartStop/group/containerd/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009147297s
--- PASS: TestStartStop/group/containerd/serial/AddonExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/containerd/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/VerifyKubernetesImages
start_stop_delete_test.go:232: (dbg) Run:  out/minikube-linux-amd64 ssh -p containerd-20201211205701-6575 "sudo crictl images -o json"
start_stop_delete_test.go:232: Found non-minikube image: kindest/kindnetd:0.5.4
start_stop_delete_test.go:232: Found non-minikube image: library/busybox:1.28.4-glibc
start_stop_delete_test.go:232: Found non-minikube image: library/minikube-local-cache-test:functional-20201211203409-6575
--- PASS: TestStartStop/group/containerd/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
TestStartStop/group/containerd/serial/Pause (3.16s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 pause -p containerd-20201211205701-6575 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p containerd-20201211205701-6575 -n containerd-20201211205701-6575
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p containerd-20201211205701-6575 -n containerd-20201211205701-6575: exit status 2 (411.256989ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p containerd-20201211205701-6575 -n containerd-20201211205701-6575
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p containerd-20201211205701-6575 -n containerd-20201211205701-6575: exit status 2 (348.659913ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 unpause -p containerd-20201211205701-6575 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p containerd-20201211205701-6575 -n containerd-20201211205701-6575
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p containerd-20201211205701-6575 -n containerd-20201211205701-6575
--- PASS: TestStartStop/group/containerd/serial/Pause (3.16s)

                                                
                                    

Test skip (10/213)

TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:360: Skipping olm test till this timeout issue is solved https://github.com/operator-framework/operator-lifecycle-manager/issues/1534#issuecomment-632342257
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:110: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:182: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:33: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/flannel (0s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:66: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
--- SKIP: TestNetworkPlugins/group/flannel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
fn_tunnel_cmd_test.go:95: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
fn_tunnel_cmd_test.go:95: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
fn_tunnel_cmd_test.go:95: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    