Test Report: Docker_Linux 8417

Commit: 11096160fe2f8f3514641b2254ae78d1dc809e3d

Test failures (3/128)

TestAddons (2403.44s)

=== RUN   TestAddons
--- FAIL: TestAddons (2403.44s)
addons_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20200609103954-5469 --wait=false --memory=2600 --alsologtostderr --addons=ingress --addons=registry --addons=metrics-server --addons=helm-tiller --addons=olm --driver=docker 
addons_test.go:44: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-20200609103954-5469 --wait=false --memory=2600 --alsologtostderr --addons=ingress --addons=registry --addons=metrics-server --addons=helm-tiller --addons=olm --driver=docker : signal: killed (40m0.002446037s)
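Note: the harness killed the start command after its 40-minute budget ("signal: killed"), so this is a startup hang rather than an explicit error. For reference, a minimal local reproduction sketch, assuming a locally built out/minikube-linux-amd64 and a working Docker daemon; the profile name addons-repro and the use of timeout to mirror the harness deadline are illustrative, not part of the test:

    timeout 40m out/minikube-linux-amd64 start -p addons-repro \
      --wait=false --memory=2600 --alsologtostderr \
      --addons=ingress --addons=registry --addons=metrics-server \
      --addons=helm-tiller --addons=olm --driver=docker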

-- stdout --
	* [addons-20200609103954-5469] minikube v1.11.0 on Debian 9.12
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube
	  - MINIKUBE_LOCATION=8417
	* Using the docker driver based on user configuration
	* Starting control plane node addons-20200609103954-5469 in cluster addons-20200609103954-5469
	* Creating docker container (CPUs=2, Memory=2600MB) ...
	* Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...
	  - kubeadm.pod-network-cidr=10.244.0.0/16
	* Verifying Kubernetes components...

-- /stdout --
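The stdout ends at "Verifying Kubernetes components...", so the run stalled during component verification. A sketch of how one might inspect the hung profile while its container is still running (standard minikube/kubectl commands; assumes the KUBECONFIG path from this run is in use, and the kubectl context name matches the profile name):

    out/minikube-linux-amd64 status -p addons-20200609103954-5469
    out/minikube-linux-amd64 logs -p addons-20200609103954-5469
    kubectl --context addons-20200609103954-5469 get pods --all-namespaces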
** stderr ** 
	I0609 10:39:54.142662   20865 start.go:98] hostinfo: {"hostname":"kvm-integration-slave7","uptime":1349,"bootTime":1591723045,"procs":220,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.12","kernelVersion":"4.9.0-12-amd64","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"ae41e7f6-8b8e-4d40-b77d-1ebb5a2d5fdb"}
	I0609 10:39:54.143306   20865 start.go:108] virtualization: kvm host
	I0609 10:39:54.147373   20865 notify.go:125] Checking for updates...
	I0609 10:39:54.150091   20865 driver.go:260] Setting default libvirt URI to qemu:///system
	I0609 10:39:54.216430   20865 docker.go:95] docker version: linux-19.03.11
	I0609 10:39:54.219236   20865 start.go:214] selected driver: docker
	I0609 10:39:54.219249   20865 start.go:611] validating driver "docker" against <nil>
	I0609 10:39:54.219270   20865 start.go:622] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
	I0609 10:39:54.219291   20865 start.go:940] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
	I0609 10:39:54.219356   20865 start_flags.go:218] no existing cluster config was found, will generate one from the flags 
	I0609 10:39:54.219498   20865 cli_runner.go:108] Run: docker system info --format "{{json .}}"
	I0609 10:39:54.320888   20865 start_flags.go:569] Waiting for no components: map[apiserver:false apps_running:false default_sa:false node_ready:false system_pods:false]
	I0609 10:39:55.100669   20865 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 in local docker daemon, skipping pull
	I0609 10:39:55.100703   20865 cache.go:113] gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 exists in daemon, skipping pull
	I0609 10:39:55.100711   20865 preload.go:95] Checking if preload exists for k8s version v1.18.3 and runtime docker
	I0609 10:39:55.100759   20865 preload.go:103] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4
	I0609 10:39:55.100769   20865 cache.go:51] Caching tarball of preloaded images
	I0609 10:39:55.100782   20865 preload.go:129] Found /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0609 10:39:55.100789   20865 cache.go:54] Finished verifying existence of preloaded tar for  v1.18.3 on docker
	I0609 10:39:55.101118   20865 profile.go:156] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/config.json ...
	I0609 10:39:55.101215   20865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/config.json: {Name:mkad76dd616d48b22ce3b838f2bf71a104cde066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 10:39:55.101443   20865 cache.go:178] Successfully downloaded all kic artifacts
	I0609 10:39:55.101473   20865 start.go:240] acquiring machines lock for addons-20200609103954-5469: {Name:mk46463277e7c0d6daf1ecc8c78c462b84291a90 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
	I0609 10:39:55.101520   20865 start.go:244] acquired machines lock for "addons-20200609103954-5469" in 37.229µs
	I0609 10:39:55.101543   20865 start.go:84] Provisioning new machine with config: &{Name:addons-20200609103954-5469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:2600 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:addons-20200609103954-5469 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false node_ready:false system_pods:false]} &{Name: IP: Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}
	I0609 10:39:55.101611   20865 start.go:121] createHost starting for "" (driver="docker")
	I0609 10:39:55.105655   20865 start.go:157] libmachine.API.Create for "addons-20200609103954-5469" (driver="docker")
	I0609 10:39:55.105698   20865 client.go:161] LocalClient.Create starting
	I0609 10:39:55.105751   20865 main.go:115] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca.pem
	I0609 10:39:55.105792   20865 main.go:115] libmachine: Decoding PEM data...
	I0609 10:39:55.105813   20865 main.go:115] libmachine: Parsing certificate...
	I0609 10:39:55.105947   20865 main.go:115] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/cert.pem
	I0609 10:39:55.105972   20865 main.go:115] libmachine: Decoding PEM data...
	I0609 10:39:55.105984   20865 main.go:115] libmachine: Parsing certificate...
	I0609 10:39:55.106377   20865 cli_runner.go:108] Run: docker ps -a --format {{.Names}}
	I0609 10:39:55.157045   20865 cli_runner.go:108] Run: docker volume create addons-20200609103954-5469 --label name.minikube.sigs.k8s.io=addons-20200609103954-5469 --label created_by.minikube.sigs.k8s.io=true
	I0609 10:39:55.210747   20865 oci.go:98] Successfully created a docker volume addons-20200609103954-5469
	W0609 10:39:55.210810   20865 oci.go:158] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0609 10:39:55.210848   20865 preload.go:95] Checking if preload exists for k8s version v1.18.3 and runtime docker
	I0609 10:39:55.211221   20865 cli_runner.go:108] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0609 10:39:55.211269   20865 preload.go:103] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4
	I0609 10:39:55.211290   20865 kic.go:134] Starting extracting preloaded images to volume ...
	I0609 10:39:55.211355   20865 cli_runner.go:108] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20200609103954-5469:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir
	I0609 10:39:55.311275   20865 cli_runner.go:108] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20200609103954-5469 --name addons-20200609103954-5469 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20200609103954-5469 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20200609103954-5469 --volume addons-20200609103954-5469:/var --cpus=2 --memory=2600mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438
	I0609 10:39:55.954515   20865 cli_runner.go:108] Run: docker container inspect addons-20200609103954-5469 --format={{.State.Running}}
	I0609 10:39:56.019760   20865 cli_runner.go:108] Run: docker container inspect addons-20200609103954-5469 --format={{.State.Status}}
	I0609 10:39:56.079721   20865 oci.go:212] the created container "addons-20200609103954-5469" has a running status.
	I0609 10:39:56.079755   20865 kic.go:162] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/addons-20200609103954-5469/id_rsa...
	I0609 10:39:56.382524   20865 kic_runner.go:179] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/addons-20200609103954-5469/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0609 10:39:56.821598   20865 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0609 10:39:56.821626   20865 kic_runner.go:114] Args: [docker exec --privileged addons-20200609103954-5469 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0609 10:40:00.290282   20865 cli_runner.go:150] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20200609103954-5469:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir: (5.078866919s)
	I0609 10:40:00.290317   20865 kic.go:139] duration metric: took 5.079026 seconds to extract preloaded images to volume
	I0609 10:40:00.290405   20865 cli_runner.go:108] Run: docker container inspect addons-20200609103954-5469 --format={{.State.Status}}
	I0609 10:40:00.343325   20865 machine.go:88] provisioning docker machine ...
	I0609 10:40:00.343371   20865 ubuntu.go:166] provisioning hostname "addons-20200609103954-5469"
	I0609 10:40:00.343461   20865 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20200609103954-5469
	I0609 10:40:00.396230   20865 main.go:115] libmachine: Using SSH client type: native
	I0609 10:40:00.396562   20865 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bfa80] 0x7bfa50 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0609 10:40:00.396587   20865 main.go:115] libmachine: About to run SSH command:
	sudo hostname addons-20200609103954-5469 && echo "addons-20200609103954-5469" | sudo tee /etc/hostname
	I0609 10:40:00.538844   20865 main.go:115] libmachine: SSH cmd err, output: <nil>: addons-20200609103954-5469
	
	I0609 10:40:00.538924   20865 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20200609103954-5469
	I0609 10:40:00.592650   20865 main.go:115] libmachine: Using SSH client type: native
	I0609 10:40:00.592838   20865 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bfa80] 0x7bfa50 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0609 10:40:00.592865   20865 main.go:115] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20200609103954-5469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20200609103954-5469/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20200609103954-5469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0609 10:40:00.711449   20865 main.go:115] libmachine: SSH cmd err, output: <nil>: 
	I0609 10:40:00.711493   20865 ubuntu.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube}
	I0609 10:40:00.711520   20865 ubuntu.go:174] setting up certificates
	I0609 10:40:00.711530   20865 provision.go:82] configureAuth start
	I0609 10:40:00.711667   20865 cli_runner.go:108] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20200609103954-5469
	I0609 10:40:00.765398   20865 provision.go:131] copyHostCerts
	I0609 10:40:00.765490   20865 exec_runner.go:91] found /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/ca.pem, removing ...
	I0609 10:40:00.765580   20865 exec_runner.go:98] cp: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/ca.pem (1038 bytes)
	I0609 10:40:00.765694   20865 exec_runner.go:91] found /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cert.pem, removing ...
	I0609 10:40:00.765738   20865 exec_runner.go:98] cp: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cert.pem (1078 bytes)
	I0609 10:40:00.765821   20865 exec_runner.go:91] found /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/key.pem, removing ...
	I0609 10:40:00.765859   20865 exec_runner.go:98] cp: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/key.pem (1679 bytes)
	I0609 10:40:00.765916   20865 provision.go:105] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca-key.pem org=jenkins.addons-20200609103954-5469 san=[172.17.0.3 localhost 127.0.0.1]
	I0609 10:40:01.210632   20865 provision.go:159] copyRemoteCerts
	I0609 10:40:01.210698   20865 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0609 10:40:01.210745   20865 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20200609103954-5469
	I0609 10:40:01.264518   20865 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/addons-20200609103954-5469/id_rsa Username:docker}
	I0609 10:40:01.392342   20865 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/server.pem --> /etc/docker/server.pem (1143 bytes)
	I0609 10:40:01.415014   20865 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0609 10:40:01.437025   20865 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1038 bytes)
	I0609 10:40:01.459479   20865 provision.go:85] duration metric: configureAuth took 747.921521ms
	I0609 10:40:01.459522   20865 ubuntu.go:190] setting minikube options for container-runtime
	I0609 10:40:01.459735   20865 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20200609103954-5469
	I0609 10:40:01.513942   20865 main.go:115] libmachine: Using SSH client type: native
	I0609 10:40:01.514124   20865 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bfa80] 0x7bfa50 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0609 10:40:01.514156   20865 main.go:115] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0609 10:40:01.635756   20865 main.go:115] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0609 10:40:01.635790   20865 ubuntu.go:71] root file system type: overlay
	I0609 10:40:01.636025   20865 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
	I0609 10:40:01.636107   20865 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20200609103954-5469
	I0609 10:40:01.689108   20865 main.go:115] libmachine: Using SSH client type: native
	I0609 10:40:01.689355   20865 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bfa80] 0x7bfa50 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0609 10:40:01.689478   20865 main.go:115] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	
	[Service]
	Type=notify
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0609 10:40:01.827466   20865 main.go:115] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	
	[Service]
	Type=notify
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP 
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0609 10:40:01.827592   20865 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20200609103954-5469
	I0609 10:40:01.881298   20865 main.go:115] libmachine: Using SSH client type: native
	I0609 10:40:01.881493   20865 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bfa80] 0x7bfa50 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0609 10:40:01.881519   20865 main.go:115] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0609 10:40:02.451803   20865 main.go:115] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2020-06-09 17:40:01.819805989 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	
	I0609 10:40:02.451835   20865 machine.go:91] provisioned docker machine in 2.108481729s
	I0609 10:40:02.451846   20865 client.go:164] LocalClient.Create took 7.346143039s
	I0609 10:40:02.451864   20865 start.go:162] duration metric: libmachine.API.Create for "addons-20200609103954-5469" took 7.346210603s
	I0609 10:40:02.451880   20865 start.go:203] post-start starting for "addons-20200609103954-5469" (driver="docker")
	I0609 10:40:02.451888   20865 start.go:213] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0609 10:40:02.451962   20865 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0609 10:40:02.452015   20865 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20200609103954-5469
	I0609 10:40:02.504648   20865 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/addons-20200609103954-5469/id_rsa Username:docker}
	I0609 10:40:02.596483   20865 ssh_runner.go:148] Run: cat /etc/os-release
	I0609 10:40:02.600460   20865 main.go:115] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0609 10:40:02.600496   20865 main.go:115] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0609 10:40:02.600510   20865 main.go:115] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0609 10:40:02.600524   20865 info.go:96] Remote host: Ubuntu 19.10
	I0609 10:40:02.600538   20865 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/addons for local assets ...
	I0609 10:40:02.600600   20865 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/files for local assets ...
	I0609 10:40:02.600633   20865 start.go:206] post-start completed in 148.743397ms
	I0609 10:40:02.600978   20865 cli_runner.go:108] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20200609103954-5469
	I0609 10:40:02.652795   20865 profile.go:156] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/config.json ...
	I0609 10:40:02.653086   20865 start.go:124] duration metric: createHost completed in 7.55146358s
	I0609 10:40:02.653110   20865 start.go:75] releasing machines lock for "addons-20200609103954-5469", held for 7.551577249s
	I0609 10:40:02.653210   20865 cli_runner.go:108] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20200609103954-5469
	I0609 10:40:02.705623   20865 ssh_runner.go:148] Run: systemctl --version
	I0609 10:40:02.705696   20865 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20200609103954-5469
	I0609 10:40:02.705736   20865 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0609 10:40:02.705825   20865 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20200609103954-5469
	I0609 10:40:02.759113   20865 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/addons-20200609103954-5469/id_rsa Username:docker}
	I0609 10:40:02.760405   20865 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/addons-20200609103954-5469/id_rsa Username:docker}
	I0609 10:40:02.944284   20865 ssh_runner.go:148] Run: sudo systemctl cat docker.service
	I0609 10:40:02.957173   20865 cruntime.go:189] skipping containerd shutdown because we are bound to it
	I0609 10:40:02.957239   20865 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
	I0609 10:40:02.970800   20865 ssh_runner.go:148] Run: sudo systemctl daemon-reload
	I0609 10:40:03.027482   20865 ssh_runner.go:148] Run: sudo systemctl start docker
	I0609 10:40:03.040466   20865 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
	I0609 10:40:03.115305   20865 cli_runner.go:108] Run: docker network ls --filter name=bridge --format {{.ID}}
	I0609 10:40:03.166699   20865 cli_runner.go:108] Run: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" 1fddf8d61680
	I0609 10:40:03.220413   20865 network.go:77] got host ip for mount in container by inspect docker network: 172.17.0.1
	I0609 10:40:03.220446   20865 start.go:268] checking
	I0609 10:40:03.220507   20865 ssh_runner.go:148] Run: grep 172.17.0.1	host.minikube.internal$ /etc/hosts
	I0609 10:40:03.225299   20865 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "172.17.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0609 10:40:03.242043   20865 preload.go:95] Checking if preload exists for k8s version v1.18.3 and runtime docker
	I0609 10:40:03.242101   20865 preload.go:103] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4
	I0609 10:40:03.242166   20865 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0609 10:40:03.303280   20865 docker.go:379] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.3
	k8s.gcr.io/kube-controller-manager:v1.18.3
	k8s.gcr.io/kube-scheduler:v1.18.3
	k8s.gcr.io/kube-apiserver:v1.18.3
	kubernetesui/dashboard:v2.0.0
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	kubernetesui/metrics-scraper:v1.0.2
	gcr.io/k8s-minikube/storage-provisioner:v1.8.1
	
	-- /stdout --
	I0609 10:40:03.303316   20865 docker.go:317] Images already preloaded, skipping extraction
	I0609 10:40:03.303377   20865 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0609 10:40:03.364868   20865 docker.go:379] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.3
	k8s.gcr.io/kube-apiserver:v1.18.3
	k8s.gcr.io/kube-scheduler:v1.18.3
	k8s.gcr.io/kube-controller-manager:v1.18.3
	kubernetesui/dashboard:v2.0.0
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	kubernetesui/metrics-scraper:v1.0.2
	gcr.io/k8s-minikube/storage-provisioner:v1.8.1
	
	-- /stdout --
	I0609 10:40:03.364902   20865 cache_images.go:69] Images are preloaded, skipping loading
	I0609 10:40:03.364964   20865 kubeadm.go:124] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.3 APIServerPort:8443 KubernetesVersion:v1.18.3 EtcdDataDir:/var/lib/minikube/etcd ClusterName:addons-20200609103954-5469 NodeName:addons-20200609103954-5469 DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.3"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.3 ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0609 10:40:03.365153   20865 kubeadm.go:128] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "addons-20200609103954-5469"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.18.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 172.17.0.3:10249
	
	I0609 10:40:03.365229   20865 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
	I0609 10:40:03.431801   20865 kubeadm.go:755] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=addons-20200609103954-5469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.3 --pod-manifest-path=/etc/kubernetes/manifests
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.3 ClusterName:addons-20200609103954-5469 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:}
	I0609 10:40:03.431891   20865 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.18.3
	I0609 10:40:03.441350   20865 binaries.go:43] Found k8s binaries, skipping transfer
	I0609 10:40:03.441429   20865 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0609 10:40:03.451508   20865 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (550 bytes)
	I0609 10:40:03.474987   20865 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
	I0609 10:40:03.497352   20865 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1476 bytes)
	I0609 10:40:03.520339   20865 start.go:268] checking
	I0609 10:40:03.520446   20865 ssh_runner.go:148] Run: grep 172.17.0.3	control-plane.minikube.internal$ /etc/hosts
	I0609 10:40:03.524775   20865 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0609 10:40:03.537300   20865 ssh_runner.go:148] Run: sudo systemctl daemon-reload
	I0609 10:40:03.595159   20865 ssh_runner.go:148] Run: sudo systemctl start kubelet
	I0609 10:40:03.610478   20865 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469 for IP: 172.17.0.3
	I0609 10:40:03.610572   20865 certs.go:169] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/ca.key
	I0609 10:40:03.610643   20865 certs.go:169] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/proxy-client-ca.key
	I0609 10:40:03.610700   20865 certs.go:273] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/client.key
	I0609 10:40:03.610710   20865 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/client.crt with IP's: []
	I0609 10:40:03.684818   20865 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/client.crt ...
	I0609 10:40:03.684851   20865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/client.crt: {Name:mk1bc873542e690f8bfcc70d38581691c1ad77cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 10:40:03.685090   20865 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/client.key ...
	I0609 10:40:03.685116   20865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/client.key: {Name:mk2238ebc16bbdee82f25dd1e996a631a0e2398f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 10:40:03.685267   20865 certs.go:273] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/apiserver.key.0f3e66d0
	I0609 10:40:03.685285   20865 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/apiserver.crt.0f3e66d0 with IP's: [172.17.0.3 10.96.0.1 127.0.0.1 10.0.0.1]
	I0609 10:40:03.914181   20865 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/apiserver.crt.0f3e66d0 ...
	I0609 10:40:03.914224   20865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/apiserver.crt.0f3e66d0: {Name:mkcb315b0e6f9cf897246c0021e8f4a1550d6e50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 10:40:03.914508   20865 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/apiserver.key.0f3e66d0 ...
	I0609 10:40:03.914529   20865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/apiserver.key.0f3e66d0: {Name:mk379c091b5ba5d49c150180d1a7f11d1f47da6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 10:40:03.914662   20865 certs.go:284] copying /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/apiserver.crt.0f3e66d0 -> /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/apiserver.crt
	I0609 10:40:03.914746   20865 certs.go:288] copying /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/apiserver.key.0f3e66d0 -> /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/apiserver.key
	I0609 10:40:03.914811   20865 certs.go:273] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/proxy-client.key
	I0609 10:40:03.914821   20865 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/proxy-client.crt with IP's: []
	I0609 10:40:04.200123   20865 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/proxy-client.crt ...
	I0609 10:40:04.200158   20865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/proxy-client.crt: {Name:mkd9c74255634e150ad8b18c6421a6f5c1a46fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 10:40:04.200376   20865 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/proxy-client.key ...
	I0609 10:40:04.200392   20865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/proxy-client.key: {Name:mkc4fe461b07c831aec7527c42202d1408fd6420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 10:40:04.200617   20865 certs.go:348] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca-key.pem (1679 bytes)
	I0609 10:40:04.200669   20865 certs.go:348] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca.pem (1038 bytes)
	I0609 10:40:04.200701   20865 certs.go:348] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/cert.pem (1078 bytes)
	I0609 10:40:04.200739   20865 certs.go:348] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/key.pem (1679 bytes)
	I0609 10:40:04.202833   20865 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
	I0609 10:40:04.226099   20865 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0609 10:40:04.249587   20865 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
	I0609 10:40:04.271774   20865 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/addons-20200609103954-5469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0609 10:40:04.295450   20865 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
	I0609 10:40:04.317791   20865 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0609 10:40:04.339472   20865 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
	I0609 10:40:04.363322   20865 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0609 10:40:04.387254   20865 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
	I0609 10:40:04.410416   20865 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
	I0609 10:40:04.433870   20865 ssh_runner.go:148] Run: openssl version
	I0609 10:40:04.440943   20865 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0609 10:40:04.451978   20865 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0609 10:40:04.456420   20865 certs.go:389] hashing: -rw-r--r-- 1 root root 1066 Jun  9 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0609 10:40:04.456502   20865 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0609 10:40:04.463910   20865 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0609 10:40:04.474217   20865 kubeadm.go:293] StartCluster: {Name:addons-20200609103954-5469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:2600 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:addons-20200609103954-5469 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false node_ready:false system_pods:false]}
	I0609 10:40:04.474453   20865 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0609 10:40:04.533911   20865 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0609 10:40:04.543900   20865 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0609 10:40:04.553579   20865 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
	I0609 10:40:04.553658   20865 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0609 10:40:04.562831   20865 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0609 10:40:04.562880   20865 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0609 10:40:19.352084   20865 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (14.789166868s)
	I0609 10:40:19.352145   20865 ssh_runner.go:148] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0609 10:40:19.352257   20865 ssh_runner.go:148] Run: sudo /var/lib/minikube/binaries/v1.18.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 10:40:19.352302   20865 ssh_runner.go:148] Run: sudo /var/lib/minikube/binaries/v1.18.3/kubectl label nodes minikube.k8s.io/version=v1.11.0 minikube.k8s.io/commit=b72d7683536818416863536d77e7e628181d7fce minikube.k8s.io/name=addons-20200609103954-5469 minikube.k8s.io/updated_at=2020_06_09T10_40_19_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 10:40:19.363706   20865 ops.go:35] apiserver oom_adj: -16
	I0609 10:40:19.957448   20865 kubeadm.go:890] duration metric: took 605.288844ms to wait for elevateKubeSystemPrivileges.
	I0609 10:40:19.957498   20865 kubeadm.go:295] StartCluster complete in 15.483290711s
	I0609 10:40:19.957516   20865 settings.go:123] acquiring lock: {Name:mk74ff71247278842614f4131323d1ef71694d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 10:40:19.957619   20865 settings.go:131] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/kubeconfig
	I0609 10:40:19.958593   20865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/kubeconfig: {Name:mk9f6e1fe6f35bf79c7fdc4e5b57845e1ecc67e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 10:40:19.958860   20865 addons.go:320] enableAddons start: toEnable=map[], additional=[ingress registry metrics-server helm-tiller olm]
	I0609 10:40:19.961910   20865 kubeadm.go:386] skip waiting for components based on config.
	I0609 10:40:19.961960   20865 addons.go:50] Setting storage-provisioner=true in profile "addons-20200609103954-5469"
	I0609 10:40:19.961989   20865 addons.go:126] Setting addon storage-provisioner=true in "addons-20200609103954-5469"
	W0609 10:40:19.961998   20865 addons.go:135] addon storage-provisioner should already be in state true
	I0609 10:40:19.962018   20865 host.go:65] Checking if "addons-20200609103954-5469" exists ...
	I0609 10:40:19.962025   20865 addons.go:50] Setting default-storageclass=true in profile "addons-20200609103954-5469"
	I0609 10:40:19.962041   20865 addons.go:50] Setting helm-tiller=true in profile "addons-20200609103954-5469"
	I0609 10:40:19.962059   20865 addons.go:266] enableOrDisableStorageClasses default-storageclass=true on "addons-20200609103954-5469"
	I0609 10:40:19.962069   20865 addons.go:126] Setting addon helm-tiller=true in "addons-20200609103954-5469"
	I0609 10:40:19.962080   20865 addons.go:50] Setting olm=true in profile "addons-20200609103954-5469"
	I0609 10:40:19.962100   20865 host.go:65] Checking if "addons-20200609103954-5469" exists ...
	I0609 10:40:19.962109   20865 addons.go:50] Setting registry=true in profile "addons-20200609103954-5469"
	I0609 10:40:19.962122   20865 addons.go:126] Setting addon olm=true in "addons-20200609103954-5469"
	I0609 10:40:19.962138   20865 addons.go:126] Setting addon registry=true in "addons-20200609103954-5469"
	I0609 10:40:19.962145   20865 host.go:65] Checking if "addons-20200609103954-5469" exists ...
	I0609 10:40:19.962129   20865 addons.go:50] Setting metrics-server=true in profile "addons-20200609103954-5469"
	I0609 10:40:19.962165   20865 host.go:65] Checking if "addons-20200609103954-5469" exists ...
	I0609 10:40:19.962175   20865 addons.go:126] Setting addon metrics-server=true in "addons-20200609103954-5469"
	I0609 10:40:19.962197   20865 host.go:65] Checking if "addons-20200609103954-5469" exists ...
	I0609 10:40:19.962617   20865 cli_runner.go:108] Run: docker container inspect addons-20200609103954-5469 --format={{.State.Status}}
	I0609 10:40:19.962018   20865 addons.go:50] Setting ingress=true in profile "addons-20200609103954-5469"
	I0609 10:40:19.962816   20865 cli_runner.go:108] Run: docker container inspect addons-20200609103954-5469 --format={{.State.Status}}
	I0609 10:40:19.962829   20865 addons.go:126] Setting addon ingress=true in "addons-20200609103954-5469"
	I0609 10:40:19.962839   20865 cli_runner.go:108] Run: docker container inspect addons-20200609103954-5469 --format={{.State.Status}}
	I0609 10:40:19.962861   20865 host.go:65] Checking if "addons-20200609103954-5469" exists ...
	I0609 10:40:19.962868   20865 cli_runner.go:108] Run: docker container inspect addons-20200609103954-5469 --format={{.State.Status}}
	I0609 10:40:19.962879   20865 cli_runner.go:108] Run: docker container inspect addons-20200609103954-5469 --format={{.State.Status}}
	I0609 10:40:19.962890   20865 cli_runner.go:108] Run: docker container inspect addons-20200609103954-5469 --format={{.State.Status}}
	I0609 10:40:19.963528   20865 cli_runner.go:108] Run: docker container inspect addons-20200609103954-5469 --format={{.State.Status}}
	I0609 10:40:19.965155   20865 node_conditions.go:99] verifying NodePressure condition ...
	I0609 10:40:19.992227   20865 node_conditions.go:111] node storage ephemeral capacity is 515928484Ki
	I0609 10:40:19.992269   20865 node_conditions.go:112] node cpu capacity is 8
	I0609 10:40:19.992292   20865 node_conditions.go:102] duration metric: took 27.117114ms to run NodePressure ...
	I0609 10:40:20.047368   20865 addons.go:233] installing /etc/kubernetes/addons/crds.yaml
	I0609 10:40:20.047404   20865 ssh_runner.go:215] scp deploy/addons/olm/crds.yaml --> /etc/kubernetes/addons/crds.yaml (814667 bytes)
	I0609 10:40:20.047485   20865 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20200609103954-5469
	I0609 10:40:20.069922   20865 addons.go:233] installing /etc/kubernetes/addons/registry-rc.yaml
	I0609 10:40:20.069953   20865 ssh_runner.go:215] scp deploy/addons/registry/registry-rc.yaml.tmpl --> /etc/kubernetes/addons/registry-rc.yaml (748 bytes)
	I0609 10:40:20.070025   20865 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20200609103954-5469
	I0609 10:40:20.082054   20865 addons.go:233] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0609 10:40:20.082084   20865 ssh_runner.go:215] scp deploy/addons/metrics-server/metrics-apiservice.yaml.tmpl --> /etc/kubernetes/addons/metrics-apiservice.yaml (401 bytes)
	I0609 10:40:20.082171   20865 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20200609103954-5469
	I0609 10:40:20.085157   20865 addons.go:126] Setting addon default-storageclass=true in "addons-20200609103954-5469"
	W0609 10:40:20.085187   20865 addons.go:135] addon default-storageclass should already be in state true
	I0609 10:40:20.085206   20865 host.go:65] Checking if "addons-20200609103954-5469" exists ...
	I0609 10:40:20.085653   20865 addons.go:233] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0609 10:40:20.085683   20865 ssh_runner.go:215] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (1709 bytes)
	I0609 10:40:20.085742   20865 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20200609103954-5469
	I0609 10:40:20.085898   20865 cli_runner.go:108] Run: docker container inspect addons-20200609103954-5469 --format={{.State.Status}}
	I0609 10:40:20.100404   20865 addons.go:233] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0609 10:40:20.100439   20865 ssh_runner.go:215] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2360 bytes)
	I0609 10:40:20.100510   20865 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20200609103954-5469
	I0609 10:40:20.103152   20865 addons.go:233] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0609 10:40:20.103185   20865 ssh_runner.go:215] scp deploy/addons/ingress/ingress-configmap.yaml.tmpl --> /etc/kubernetes/addons/ingress-configmap.yaml (1251 bytes)
	I0609 10:40:20.103257   20865 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20200609103954-5469
	I0609 10:40:20.139504   20865 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/addons-20200609103954-5469/id_rsa Username:docker}
	I0609 10:40:20.178843   20865 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/addons-20200609103954-5469/id_rsa Username:docker}
	I0609 10:40:20.196501   20865 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/addons-20200609103954-5469/id_rsa Username:docker}
	I0609 10:40:20.202605   20865 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/addons-20200609103954-5469/id_rsa Username:docker}
	I0609 10:40:20.206548   20865 addons.go:233] installing /etc/kubernetes/addons/storageclass.yaml
	I0609 10:40:20.206589   20865 ssh_runner.go:215] scp deploy/addons/storageclass/storageclass.yaml.tmpl --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0609 10:40:20.206811   20865 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20200609103954-5469
	I0609 10:40:20.215453   20865 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/addons-20200609103954-5469/id_rsa Username:docker}
	I0609 10:40:20.222922   20865 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/addons-20200609103954-5469/id_rsa Username:docker}
	I0609 10:40:20.279962   20865 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/addons-20200609103954-5469/id_rsa Username:docker}
	I0609 10:40:20.363486   20865 addons.go:233] installing /etc/kubernetes/addons/olm.yaml
	I0609 10:40:20.363523   20865 ssh_runner.go:215] scp deploy/addons/olm/olm.yaml --> /etc/kubernetes/addons/olm.yaml (9184 bytes)
	I0609 10:40:20.372974   20865 addons.go:233] installing /etc/kubernetes/addons/registry-svc.yaml
	I0609 10:40:20.373005   20865 ssh_runner.go:215] scp deploy/addons/registry/registry-svc.yaml.tmpl --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0609 10:40:20.546855   20865 addons.go:233] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0609 10:40:20.546932   20865 ssh_runner.go:215] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0609 10:40:20.547342   20865 addons.go:233] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0609 10:40:20.547567   20865 ssh_runner.go:215] scp deploy/addons/registry/registry-proxy.yaml.tmpl --> /etc/kubernetes/addons/registry-proxy.yaml (878 bytes)
	I0609 10:40:20.547533   20865 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0609 10:40:20.558867   20865 addons.go:233] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0609 10:40:20.558895   20865 ssh_runner.go:215] scp deploy/addons/ingress/ingress-rbac.yaml.tmpl --> /etc/kubernetes/addons/ingress-rbac.yaml (4828 bytes)
	I0609 10:40:20.650201   20865 addons.go:233] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0609 10:40:20.650230   20865 ssh_runner.go:215] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (699 bytes)
	I0609 10:40:20.661430   20865 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0609 10:40:20.668809   20865 addons.go:233] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0609 10:40:20.668841   20865 ssh_runner.go:215] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0609 10:40:20.742448   20865 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0609 10:40:20.743806   20865 addons.go:233] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0609 10:40:20.743865   20865 ssh_runner.go:215] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (8421 bytes)
	I0609 10:40:20.750259   20865 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0609 10:40:20.852698   20865 addons.go:233] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0609 10:40:20.852765   20865 ssh_runner.go:215] scp deploy/addons/metrics-server/metrics-server-service.yaml.tmpl --> /etc/kubernetes/addons/metrics-server-service.yaml (401 bytes)
	I0609 10:40:20.946055   20865 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0609 10:40:20.948424   20865 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0609 10:40:21.057636   20865 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0609 10:40:24.146806   20865 ssh_runner.go:188] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (3.599137698s)
	W0609 10:40:24.146872   20865 addons.go:256] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0609 10:40:24.146890   20865 ssh_runner.go:188] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.485417687s)
	I0609 10:40:24.146968   20865 ssh_runner.go:188] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.404487628s)
	I0609 10:40:24.147032   20865 ssh_runner.go:188] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.396707155s)
	I0609 10:40:24.147184   20865 ssh_runner.go:188] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (3.201056201s)
	I0609 10:40:24.147258   20865 ssh_runner.go:188] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (3.198801409s)
	I0609 10:40:24.147318   20865 ssh_runner.go:188] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.089591688s)
	I0609 10:40:24.438232   20865 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml

                                                
                                                
** /stderr **
addons_test.go:46: out/minikube-linux-amd64 start -p addons-20200609103954-5469 --wait=false --memory=2600 --alsologtostderr --addons=ingress --addons=registry --addons=metrics-server --addons=helm-tiller --addons=olm --driver=docker  failed: signal: killed
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p addons-20200609103954-5469
helpers.go:170: (dbg) Done: out/minikube-linux-amd64 delete -p addons-20200609103954-5469: (3.427365802s)
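The first error recorded before the 40-minute kill comes from the olm addon: crds.yaml and olm.yaml are applied in a single kubectl invocation, so the OperatorGroup, ClusterServiceVersion and CatalogSource objects reach the API server before the freshly created CRDs are established and discoverable, which is exactly what the "no matches for kind" messages say; minikube then retries the same apply at 10:40:24. A minimal sketch of the two-phase pattern that avoids that race, using the CRD names from the stdout above (a hypothetical manual reproduction, not the test's own code):

	# establish the CRDs first, then apply the objects that instantiate them
	kubectl apply -f /etc/kubernetes/addons/crds.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/operatorgroups.operators.coreos.com \
	  crd/clusterserviceversions.operators.coreos.com \
	  crd/catalogsources.operators.coreos.com
	kubectl apply -f /etc/kubernetes/addons/olm.yaml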

                                                
                                    
TestMultiNode/serial/StartAfterStop (72.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
--- FAIL: TestMultiNode/serial/StartAfterStop (72.86s)
multinode_test.go:144: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:154: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20200609112134-5469 node start m03 --alsologtostderr
multinode_test.go:154: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20200609112134-5469 node start m03 --alsologtostderr: exit status 70 (1m8.608112391s)

                                                
                                                
-- stdout --
	* Starting node multinode-20200609112134-5469-m03 in cluster multinode-20200609112134-5469
	* Restarting existing docker container for "multinode-20200609112134-5469-m03" ...
	* Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0609 11:23:06.458064   22822 mustload.go:64] Loading cluster: multinode-20200609112134-5469
	I0609 11:23:06.458890   22822 cli_runner.go:108] Run: docker container inspect multinode-20200609112134-5469-m03 --format={{.State.Status}}
	W0609 11:23:06.517973   22822 host.go:57] "multinode-20200609112134-5469-m03" host status: Stopped
	I0609 11:23:07.341613   22822 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 in local docker daemon, skipping pull
	I0609 11:23:07.341651   22822 cache.go:113] gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 exists in daemon, skipping pull
	I0609 11:23:07.341662   22822 preload.go:95] Checking if preload exists for k8s version v1.18.3 and runtime docker
	I0609 11:23:07.341707   22822 preload.go:103] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4
	I0609 11:23:07.341740   22822 cache.go:51] Caching tarball of preloaded images
	I0609 11:23:07.341754   22822 preload.go:129] Found /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0609 11:23:07.341762   22822 cache.go:54] Finished verifying existence of preloaded tar for  v1.18.3 on docker
	I0609 11:23:07.341882   22822 profile.go:156] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/multinode-20200609112134-5469/config.json ...
	I0609 11:23:07.342133   22822 cache.go:178] Successfully downloaded all kic artifacts
	I0609 11:23:07.342164   22822 start.go:240] acquiring machines lock for multinode-20200609112134-5469-m03: {Name:mk46463277e7c0d6daf1ecc8c78c462b84291a90 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
	I0609 11:23:07.342387   22822 start.go:244] acquired machines lock for "multinode-20200609112134-5469-m03" in 196.804µs
	I0609 11:23:07.342415   22822 start.go:88] Skipping create...Using existing machine configuration
	I0609 11:23:07.342464   22822 fix.go:53] fixHost starting: m03
	I0609 11:23:07.342827   22822 cli_runner.go:108] Run: docker container inspect multinode-20200609112134-5469-m03 --format={{.State.Status}}
	I0609 11:23:07.399774   22822 fix.go:105] recreateIfNeeded on multinode-20200609112134-5469-m03: state=Stopped err=<nil>
	W0609 11:23:07.399808   22822 fix.go:131] unexpected machine state, will restart: <nil>
	I0609 11:23:07.404794   22822 cli_runner.go:108] Run: docker start multinode-20200609112134-5469-m03
	I0609 11:23:07.852661   22822 cli_runner.go:108] Run: docker container inspect multinode-20200609112134-5469-m03 --format={{.State.Status}}
	I0609 11:23:07.909825   22822 kic.go:318] container "multinode-20200609112134-5469-m03" state is running.
	I0609 11:23:07.910394   22822 cli_runner.go:108] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20200609112134-5469-m03
	I0609 11:23:07.974738   22822 profile.go:156] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/multinode-20200609112134-5469/config.json ...
	I0609 11:23:07.975016   22822 machine.go:88] provisioning docker machine ...
	I0609 11:23:07.975055   22822 ubuntu.go:166] provisioning hostname "multinode-20200609112134-5469-m03"
	I0609 11:23:07.975116   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m03
	I0609 11:23:08.038619   22822 main.go:115] libmachine: Using SSH client type: native
	I0609 11:23:08.039016   22822 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bfa80] 0x7bfa50 <nil>  [] 0s} 127.0.0.1 32803 <nil> <nil>}
	I0609 11:23:08.039055   22822 main.go:115] libmachine: About to run SSH command:
	sudo hostname multinode-20200609112134-5469-m03 && echo "multinode-20200609112134-5469-m03" | sudo tee /etc/hostname
	I0609 11:23:08.039859   22822 main.go:115] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42752->127.0.0.1:32803: read: connection reset by peer
	I0609 11:23:11.187866   22822 main.go:115] libmachine: SSH cmd err, output: <nil>: multinode-20200609112134-5469-m03
	
	I0609 11:23:11.187965   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m03
	I0609 11:23:11.243471   22822 main.go:115] libmachine: Using SSH client type: native
	I0609 11:23:11.243739   22822 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bfa80] 0x7bfa50 <nil>  [] 0s} 127.0.0.1 32803 <nil> <nil>}
	I0609 11:23:11.243784   22822 main.go:115] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20200609112134-5469-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20200609112134-5469-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20200609112134-5469-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0609 11:23:11.367872   22822 main.go:115] libmachine: SSH cmd err, output: <nil>: 
	I0609 11:23:11.367926   22822 ubuntu.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube}
	I0609 11:23:11.367963   22822 ubuntu.go:174] setting up certificates
	I0609 11:23:11.367976   22822 provision.go:82] configureAuth start
	I0609 11:23:11.368083   22822 cli_runner.go:108] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20200609112134-5469-m03
	I0609 11:23:11.423392   22822 provision.go:131] copyHostCerts
	I0609 11:23:11.423475   22822 exec_runner.go:91] found /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cert.pem, removing ...
	I0609 11:23:11.423544   22822 exec_runner.go:98] cp: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cert.pem (1078 bytes)
	I0609 11:23:11.423669   22822 exec_runner.go:91] found /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/key.pem, removing ...
	I0609 11:23:11.423706   22822 exec_runner.go:98] cp: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/key.pem (1679 bytes)
	I0609 11:23:11.423805   22822 exec_runner.go:91] found /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/ca.pem, removing ...
	I0609 11:23:11.423841   22822 exec_runner.go:98] cp: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/ca.pem (1038 bytes)
	I0609 11:23:11.423910   22822 provision.go:105] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca-key.pem org=jenkins.multinode-20200609112134-5469-m03 san=[172.17.0.2 localhost 127.0.0.1]
	I0609 11:23:11.749579   22822 provision.go:159] copyRemoteCerts
	I0609 11:23:11.749657   22822 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0609 11:23:11.749713   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m03
	I0609 11:23:11.804053   22822 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/multinode-20200609112134-5469-m03/id_rsa Username:docker}
	I0609 11:23:11.892202   22822 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1038 bytes)
	I0609 11:23:11.915277   22822 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/server.pem --> /etc/docker/server.pem (1155 bytes)
	I0609 11:23:11.938640   22822 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0609 11:23:11.960814   22822 provision.go:85] duration metric: configureAuth took 592.818718ms
	I0609 11:23:11.960841   22822 ubuntu.go:190] setting minikube options for container-runtime
	I0609 11:23:11.961113   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m03
	I0609 11:23:12.016576   22822 main.go:115] libmachine: Using SSH client type: native
	I0609 11:23:12.016800   22822 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bfa80] 0x7bfa50 <nil>  [] 0s} 127.0.0.1 32803 <nil> <nil>}
	I0609 11:23:12.016817   22822 main.go:115] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0609 11:23:12.139602   22822 main.go:115] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0609 11:23:12.139639   22822 ubuntu.go:71] root file system type: overlay
	I0609 11:23:12.139834   22822 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
	I0609 11:23:12.139917   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m03
	I0609 11:23:12.193968   22822 main.go:115] libmachine: Using SSH client type: native
	I0609 11:23:12.194189   22822 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bfa80] 0x7bfa50 <nil>  [] 0s} 127.0.0.1 32803 <nil> <nil>}
	I0609 11:23:12.194303   22822 main.go:115] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	
	[Service]
	Type=notify
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0609 11:23:12.327844   22822 main.go:115] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	
	[Service]
	Type=notify
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP 
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0609 11:23:12.327941   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m03
	I0609 11:23:12.383146   22822 main.go:115] libmachine: Using SSH client type: native
	I0609 11:23:12.383374   22822 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bfa80] 0x7bfa50 <nil>  [] 0s} 127.0.0.1 32803 <nil> <nil>}
	I0609 11:23:12.383404   22822 main.go:115] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0609 11:23:12.516233   22822 main.go:115] libmachine: SSH cmd err, output: <nil>: 
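	The unit update above follows a compare-then-swap idiom: the full unit body is written to docker.service.new, diffed against the live unit, and only moved into place (followed by daemon-reload, enable and restart) when the two differ, so an already-provisioned node avoids a needless Docker restart. Note also that the echoed result shows ExecReload=/bin/kill -s HUP with the $MAINPID gone: the unit body is double-quoted in the remote printf, so the (empty) variable is expanded by the remote shell before the file is written. A minimal standalone sketch of the idiom, with UNIT_CONTENT standing in for the unit text above:
	
	# write the candidate unit, then swap it in only if it differs from the live one
	printf %s "$UNIT_CONTENT" | sudo tee /lib/systemd/system/docker.service.new >/dev/null
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
	fi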
	I0609 11:23:12.516269   22822 machine.go:91] provisioned docker machine in 4.541226375s
	I0609 11:23:12.516283   22822 start.go:203] post-start starting for "multinode-20200609112134-5469-m03" (driver="docker")
	I0609 11:23:12.516293   22822 start.go:213] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0609 11:23:12.516366   22822 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0609 11:23:12.516429   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m03
	I0609 11:23:12.569916   22822 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/multinode-20200609112134-5469-m03/id_rsa Username:docker}
	I0609 11:23:12.660716   22822 ssh_runner.go:148] Run: cat /etc/os-release
	I0609 11:23:12.664748   22822 main.go:115] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0609 11:23:12.664782   22822 main.go:115] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0609 11:23:12.664793   22822 main.go:115] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0609 11:23:12.664799   22822 info.go:96] Remote host: Ubuntu 19.10
	I0609 11:23:12.664812   22822 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/addons for local assets ...
	I0609 11:23:12.664868   22822 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/files for local assets ...
	I0609 11:23:12.665012   22822 filesync.go:141] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/files/etc/test/nested/copy/5469/hosts -> hosts in /etc/test/nested/copy/5469
	I0609 11:23:12.665059   22822 ssh_runner.go:148] Run: sudo mkdir -p /etc/test/nested/copy/5469
	I0609 11:23:12.674626   22822 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/files/etc/test/nested/copy/5469/hosts --> /etc/test/nested/copy/5469/hosts (40 bytes)
	I0609 11:23:12.699199   22822 start.go:206] post-start completed in 182.896432ms
	I0609 11:23:12.699233   22822 fix.go:55] fixHost completed within 5.356770312s
	I0609 11:23:12.699242   22822 start.go:75] releasing machines lock for "multinode-20200609112134-5469-m03", held for 5.356837452s
	I0609 11:23:12.699346   22822 cli_runner.go:108] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20200609112134-5469-m03
	I0609 11:23:12.753742   22822 ssh_runner.go:148] Run: systemctl --version
	I0609 11:23:12.753806   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m03
	I0609 11:23:12.753898   22822 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0609 11:23:12.754098   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m03
	I0609 11:23:12.811830   22822 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/multinode-20200609112134-5469-m03/id_rsa Username:docker}
	I0609 11:23:12.812910   22822 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/multinode-20200609112134-5469-m03/id_rsa Username:docker}
	I0609 11:23:12.916362   22822 ssh_runner.go:148] Run: sudo systemctl cat docker.service
	I0609 11:23:12.931246   22822 cruntime.go:189] skipping containerd shutdown because we are bound to it
	I0609 11:23:12.931324   22822 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
	I0609 11:23:12.945355   22822 ssh_runner.go:148] Run: sudo systemctl daemon-reload
	I0609 11:23:13.012495   22822 ssh_runner.go:148] Run: sudo systemctl start docker
	I0609 11:23:13.025258   22822 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
	I0609 11:23:13.096026   22822 cli_runner.go:108] Run: docker network ls --filter name=bridge --format {{.ID}}
	I0609 11:23:13.149310   22822 cli_runner.go:108] Run: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" 1fddf8d61680
	I0609 11:23:13.202411   22822 network.go:77] got host ip for mount in container by inspect docker network: 172.17.0.1
	I0609 11:23:13.202495   22822 start.go:268] checking
	I0609 11:23:13.202555   22822 ssh_runner.go:148] Run: grep 172.17.0.1	host.minikube.internal$ /etc/hosts
	I0609 11:23:13.207207   22822 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "172.17.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
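	The sequence above resolves the Docker bridge gateway (172.17.0.1) and pins it to host.minikube.internal in the node's /etc/hosts, replacing any stale entry rather than appending a duplicate. A minimal sketch of the same idempotent update, assuming the default bridge network (values are illustrative):
	
	GATEWAY_IP=$(docker network inspect --format '{{(index .IPAM.Config 0).Gateway}}' bridge)
	# drop any previous mapping, then append the current one
	{ grep -v 'host.minikube.internal$' /etc/hosts; printf '%s\thost.minikube.internal\n' "$GATEWAY_IP"; } > /tmp/hosts.new
	sudo cp /tmp/hosts.new /etc/hosts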
	I0609 11:23:13.219600   22822 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/multinode-20200609112134-5469 for IP: 172.17.0.2
	I0609 11:23:13.219668   22822 certs.go:169] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/ca.key
	I0609 11:23:13.219687   22822 certs.go:169] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/proxy-client-ca.key
	I0609 11:23:13.219775   22822 certs.go:348] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/5469.pem (1338 bytes)
	W0609 11:23:13.219826   22822 certs.go:344] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/5469_empty.pem, impossibly tiny 0 bytes
	I0609 11:23:13.219840   22822 certs.go:348] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca-key.pem (1679 bytes)
	I0609 11:23:13.219878   22822 certs.go:348] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca.pem (1038 bytes)
	I0609 11:23:13.219936   22822 certs.go:348] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/cert.pem (1078 bytes)
	I0609 11:23:13.219962   22822 certs.go:348] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/key.pem (1679 bytes)
	I0609 11:23:13.220987   22822 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
	I0609 11:23:13.244147   22822 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0609 11:23:13.266882   22822 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
	I0609 11:23:13.291919   22822 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0609 11:23:13.320280   22822 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/5469.pem --> /usr/share/ca-certificates/5469.pem (1338 bytes)
	I0609 11:23:13.344294   22822 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
	I0609 11:23:13.368778   22822 ssh_runner.go:148] Run: openssl version
	I0609 11:23:13.375626   22822 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0609 11:23:13.385688   22822 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0609 11:23:13.390164   22822 certs.go:389] hashing: -rw-r--r-- 1 root root 1066 Jun  9 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0609 11:23:13.390224   22822 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0609 11:23:13.396745   22822 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0609 11:23:13.405593   22822 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469.pem && ln -fs /usr/share/ca-certificates/5469.pem /etc/ssl/certs/5469.pem"
	I0609 11:23:13.415302   22822 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/5469.pem
	I0609 11:23:13.419672   22822 certs.go:389] hashing: -rw-r--r-- 1 root root 1338 Jun  9 18:19 /usr/share/ca-certificates/5469.pem
	I0609 11:23:13.419753   22822 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469.pem
	I0609 11:23:13.426854   22822 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5469.pem /etc/ssl/certs/51391683.0"
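	The openssl/ln steps above make the copied CAs visible to OpenSSL-based clients: each PEM under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject hash with a .0 suffix (b5213941.0 for minikubeCA.pem, 51391683.0 for 5469.pem). A minimal sketch of deriving such a link name for one file:
	
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"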
	I0609 11:23:13.436949   22822 kubeadm.go:124] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.18.3 EtcdDataDir:/var/lib/minikube/etcd ClusterName:multinode-20200609112134-5469 NodeName:multinode-20200609112134-5469-m03 DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.4"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.2 ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0609 11:23:13.437115   22822 kubeadm.go:128] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "multinode-20200609112134-5469-m03"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.18.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 172.17.0.2:10249
	
	I0609 11:23:13.437187   22822 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
	I0609 11:23:13.437166   22822 cache.go:92] acquiring lock: {Name:mkf4b3448425d401ea0fe30a83ad99c11c351925 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0609 11:23:13.437193   22822 cache.go:92] acquiring lock: {Name:mke98536499a4f6720a0176e6f6570186bd16443 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0609 11:23:13.437304   22822 cache.go:100] /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/images/busybox_latest exists
	I0609 11:23:13.437328   22822 cache.go:100] /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/images/k8s.gcr.io/pause_latest exists
	I0609 11:23:13.437330   22822 cache.go:81] cache image "busybox:latest" -> "/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/images/busybox_latest" took 153.925µs
	I0609 11:23:13.437350   22822 cache.go:66] save to tar file busybox:latest -> /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/images/busybox_latest succeeded
	I0609 11:23:13.437359   22822 cache.go:81] cache image "k8s.gcr.io/pause:latest" -> "/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/images/k8s.gcr.io/pause_latest" took 215.957µs
	I0609 11:23:13.437373   22822 cache.go:66] save to tar file k8s.gcr.io/pause:latest -> /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/images/k8s.gcr.io/pause_latest succeeded
	I0609 11:23:13.437393   22822 cache.go:73] Successfully saved all images to host disk.
	I0609 11:23:13.437569   22822 cli_runner.go:108] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io --format {{.Names}}
	I0609 11:23:13.497439   22822 cli_runner.go:108] Run: docker container inspect functional-20200609111957-5469 --format={{.State.Status}}
	I0609 11:23:13.511365   22822 kubeadm.go:755] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=multinode-20200609112134-5469-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2 --pod-manifest-path=/etc/kubernetes/manifests
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.3 ClusterName:multinode-20200609112134-5469 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:}
	I0609 11:23:13.511457   22822 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.18.3
	I0609 11:23:13.521468   22822 binaries.go:43] Found k8s binaries, skipping transfer
	I0609 11:23:13.521572   22822 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0609 11:23:13.531580   22822 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (557 bytes)
	I0609 11:23:13.557060   22822 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
	I0609 11:23:13.557922   22822 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0609 11:23:13.558006   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20200609111957-5469
	I0609 11:23:13.580279   22822 start.go:268] checking
	I0609 11:23:13.580346   22822 ssh_runner.go:148] Run: grep 172.17.0.4	control-plane.minikube.internal$ /etc/hosts
	I0609 11:23:13.584510   22822 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
	I0609 11:23:13.596389   22822 ssh_runner.go:148] Run: sudo systemctl daemon-reload
	I0609 11:23:13.623323   22822 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/functional-20200609111957-5469/id_rsa Username:docker}
	I0609 11:23:13.659940   22822 ssh_runner.go:148] Run: sudo systemctl start kubelet
	I0609 11:23:13.672797   22822 host.go:65] Checking if "multinode-20200609112134-5469" exists ...
	I0609 11:23:13.673102   22822 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm token create --print-join-command --ttl=0"
	I0609 11:23:13.673170   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469
	I0609 11:23:13.729296   22822 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32791 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/multinode-20200609112134-5469/id_rsa Username:docker}
	I0609 11:23:13.770116   22822 docker.go:379] Got preloaded images: -- stdout --
	busybox:latest
	k8s.gcr.io/kube-proxy:v1.18.3
	k8s.gcr.io/kube-controller-manager:v1.18.3
	k8s.gcr.io/kube-apiserver:v1.18.3
	k8s.gcr.io/kube-scheduler:v1.18.3
	kubernetesui/dashboard:v2.0.0
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	kubernetesui/metrics-scraper:v1.0.2
	busybox:1.28.4-glibc
	gcr.io/k8s-minikube/storage-provisioner:v1.8.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I0609 11:23:13.770148   22822 cache_images.go:69] Images are preloaded, skipping loading
	I0609 11:23:13.770627   22822 cli_runner.go:108] Run: docker container inspect multinode-20200609112134-5469 --format={{.State.Status}}
	I0609 11:23:13.823865   22822 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0609 11:23:13.823913   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469
	I0609 11:23:13.881544   22822 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32791 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/multinode-20200609112134-5469/id_rsa Username:docker}
	I0609 11:23:13.889888   22822 kubeadm.go:602] JoinCluster: {Name:multinode-20200609112134-5469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:multinode-20200609112134-5469 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true} {Name:m02 IP:172.17.0.5 Port:0 KubernetesVersion:v1.18.3 ControlPlane:false Worker:true} {Name:m03 IP:172.17.0.2 Port:0 KubernetesVersion:v1.18.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true system_pods:true]}
	I0609 11:23:13.890017   22822 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset -f"
	I0609 11:23:14.152639   22822 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xlkgi8.irz6duup9o665b2e     --discovery-token-ca-cert-hash sha256:552ac9d9cc73cd6b819df4880d824c03a84f81028b36aa8de5bf057c34a0ed51 --ignore-preflight-errors=all --node-name=multinode-20200609112134-5469-m03"
	I0609 11:23:14.152767   22822 docker.go:379] Got preloaded images: -- stdout --
	busybox:latest
	k8s.gcr.io/kube-proxy:v1.18.3
	k8s.gcr.io/kube-scheduler:v1.18.3
	k8s.gcr.io/kube-controller-manager:v1.18.3
	k8s.gcr.io/kube-apiserver:v1.18.3
	kubernetesui/dashboard:v2.0.0
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	kindest/kindnetd:0.5.4
	k8s.gcr.io/etcd:3.4.3-0
	kubernetesui/metrics-scraper:v1.0.2
	gcr.io/k8s-minikube/storage-provisioner:v1.8.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I0609 11:23:14.152792   22822 cache_images.go:69] Images are preloaded, skipping loading
	I0609 11:23:14.153207   22822 cli_runner.go:108] Run: docker container inspect multinode-20200609112134-5469-m02 --format={{.State.Status}}
	I0609 11:23:14.217217   22822 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0609 11:23:14.217268   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m02
	I0609 11:23:14.277312   22822 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32795 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/multinode-20200609112134-5469-m02/id_rsa Username:docker}
	I0609 11:23:14.578720   22822 docker.go:379] Got preloaded images: -- stdout --
	busybox:latest
	k8s.gcr.io/kube-proxy:v1.18.3
	k8s.gcr.io/kube-scheduler:v1.18.3
	k8s.gcr.io/kube-controller-manager:v1.18.3
	k8s.gcr.io/kube-apiserver:v1.18.3
	kubernetesui/dashboard:v2.0.0
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	kindest/kindnetd:0.5.4
	k8s.gcr.io/etcd:3.4.3-0
	kubernetesui/metrics-scraper:v1.0.2
	gcr.io/k8s-minikube/storage-provisioner:v1.8.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I0609 11:23:14.578750   22822 cache_images.go:69] Images are preloaded, skipping loading
	I0609 11:23:14.579185   22822 cli_runner.go:108] Run: docker container inspect multinode-20200609112134-5469-m03 --format={{.State.Status}}
	I0609 11:23:14.633931   22822 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0609 11:23:14.634012   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m03
	I0609 11:23:14.689238   22822 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/multinode-20200609112134-5469-m03/id_rsa Username:docker}
	I0609 11:23:14.834846   22822 docker.go:379] Got preloaded images: -- stdout --
	busybox:latest
	k8s.gcr.io/kube-proxy:v1.18.3
	k8s.gcr.io/kube-scheduler:v1.18.3
	k8s.gcr.io/kube-apiserver:v1.18.3
	k8s.gcr.io/kube-controller-manager:v1.18.3
	kubernetesui/dashboard:v2.0.0
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	kubernetesui/metrics-scraper:v1.0.2
	gcr.io/k8s-minikube/storage-provisioner:v1.8.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I0609 11:23:14.834873   22822 cache_images.go:69] Images are preloaded, skipping loading
	I0609 11:23:14.834885   22822 cache_images.go:225] succeeded pushing to: functional-20200609111957-5469 multinode-20200609112134-5469 multinode-20200609112134-5469-m02 multinode-20200609112134-5469-m03
	I0609 11:23:14.834895   22822 cache_images.go:226] failed pushing to: 
	I0609 11:23:25.625553   22822 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset -f"
	I0609 11:23:25.752796   22822 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xlkgi8.irz6duup9o665b2e     --discovery-token-ca-cert-hash sha256:552ac9d9cc73cd6b819df4880d824c03a84f81028b36aa8de5bf057c34a0ed51 --ignore-preflight-errors=all --node-name=multinode-20200609112134-5469-m03"
	I0609 11:23:47.776490   22822 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset -f"
	I0609 11:23:47.897857   22822 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xlkgi8.irz6duup9o665b2e     --discovery-token-ca-cert-hash sha256:552ac9d9cc73cd6b819df4880d824c03a84f81028b36aa8de5bf057c34a0ed51 --ignore-preflight-errors=all --node-name=multinode-20200609112134-5469-m03"
	I0609 11:24:14.514030   22822 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset -f"
	I0609 11:24:14.630732   22822 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xlkgi8.irz6duup9o665b2e     --discovery-token-ca-cert-hash sha256:552ac9d9cc73cd6b819df4880d824c03a84f81028b36aa8de5bf057c34a0ed51 --ignore-preflight-errors=all --node-name=multinode-20200609112134-5469-m03"
	I0609 11:24:15.010241   22822 kubeadm.go:604] JoinCluster complete in 1m1.120361642s
	I0609 11:24:15.010333   22822 exit.go:58] WithError(failed to start node)=startup failed: joining cluster: joining cp: cmd failed: sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xlkgi8.irz6duup9o665b2e     --discovery-token-ca-cert-hash sha256:552ac9d9cc73cd6b819df4880d824c03a84f81028b36aa8de5bf057c34a0ed51 --ignore-preflight-errors=all --node-name=multinode-20200609112134-5469-m03
	-- stdout --
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 4.9.0-12-amd64
	DOCKER_VERSION: 19.03.2
	DOCKER_GRAPH_DRIVER: overlay2
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
	
	-- /stdout --
	** stderr ** 
	W0609 18:24:14.683047    1264 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-12-amd64\n", err: exit status 1
	error execution phase kubelet-start: a Node with name "multinode-20200609112134-5469-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	** /stderr **
	: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xlkgi8.irz6duup9o665b2e     --discovery-token-ca-cert-hash sha256:552ac9d9cc73cd6b819df4880d824c03a84f81028b36aa8de5bf057c34a0ed51 --ignore-preflight-errors=all --node-name=multinode-20200609112134-5469-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 4.9.0-12-amd64
	DOCKER_VERSION: 19.03.2
	DOCKER_GRAPH_DRIVER: overlay2
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
	
	stderr:
	W0609 18:24:14.683047    1264 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-12-amd64\n", err: exit status 1
	error execution phase kubelet-start: a Node with name "multinode-20200609112134-5469-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	 called from:
	goroutine 1 [running]:
	runtime/debug.Stack(0xc0007bce00, 0x0, 0x0)
		/usr/local/go/src/runtime/debug/stack.go:24 +0x9d
	k8s.io/minikube/pkg/minikube/exit.WithError(0x1ba2834, 0x14, 0x1e93220, 0xc00077f9e0)
		/app/pkg/minikube/exit/exit.go:58 +0x34
	k8s.io/minikube/cmd/minikube/cmd.glob..func17(0x2c6d780, 0xc00073e600, 0x1, 0x4)
		/app/cmd/minikube/cmd/node_start.go:73 +0x5c5
	github.com/spf13/cobra.(*Command).execute(0x2c6d780, 0xc00073e5c0, 0x4, 0x4, 0x2c6d780, 0xc00073e5c0)
		/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:846 +0x2aa
	github.com/spf13/cobra.(*Command).ExecuteC(0x2c6e4a0, 0x0, 0x1, 0xc000048400)
		/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950 +0x349
	github.com/spf13/cobra.(*Command).Execute(...)
		/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
	k8s.io/minikube/cmd/minikube/cmd.Execute()
		/app/cmd/minikube/cmd/root.go:112 +0x747
	main.main()
		/app/cmd/minikube/main.go:71 +0x143
	W0609 11:24:15.012008   22822 out.go:201] failed to start node: startup failed: joining cluster: joining cp: cmd failed: sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xlkgi8.irz6duup9o665b2e     --discovery-token-ca-cert-hash sha256:552ac9d9cc73cd6b819df4880d824c03a84f81028b36aa8de5bf057c34a0ed51 --ignore-preflight-errors=all --node-name=multinode-20200609112134-5469-m03
	-- stdout --
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 4.9.0-12-amd64
	DOCKER_VERSION: 19.03.2
	DOCKER_GRAPH_DRIVER: overlay2
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
	
	-- /stdout --
	** stderr ** 
	W0609 18:24:14.683047    1264 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-12-amd64\n", err: exit status 1
	error execution phase kubelet-start: a Node with name "multinode-20200609112134-5469-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	** /stderr **
	: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xlkgi8.irz6duup9o665b2e     --discovery-token-ca-cert-hash sha256:552ac9d9cc73cd6b819df4880d824c03a84f81028b36aa8de5bf057c34a0ed51 --ignore-preflight-errors=all --node-name=multinode-20200609112134-5469-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 4.9.0-12-amd64
	DOCKER_VERSION: 19.03.2
	DOCKER_GRAPH_DRIVER: overlay2
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
	
	stderr:
	W0609 18:24:14.683047    1264 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-12-amd64\n", err: exit status 1
	error execution phase kubelet-start: a Node with name "multinode-20200609112134-5469-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	* 
	X failed to start node: startup failed: joining cluster: joining cp: cmd failed: sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xlkgi8.irz6duup9o665b2e     --discovery-token-ca-cert-hash sha256:552ac9d9cc73cd6b819df4880d824c03a84f81028b36aa8de5bf057c34a0ed51 --ignore-preflight-errors=all --node-name=multinode-20200609112134-5469-m03
	-- stdout --
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 4.9.0-12-amd64
	DOCKER_VERSION: 19.03.2
	DOCKER_GRAPH_DRIVER: overlay2
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
	
	-- /stdout --
	** stderr ** 
	W0609 18:24:14.683047    1264 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-12-amd64\n", err: exit status 1
	error execution phase kubelet-start: a Node with name "multinode-20200609112134-5469-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	** /stderr **
	: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xlkgi8.irz6duup9o665b2e     --discovery-token-ca-cert-hash sha256:552ac9d9cc73cd6b819df4880d824c03a84f81028b36aa8de5bf057c34a0ed51 --ignore-preflight-errors=all --node-name=multinode-20200609112134-5469-m03": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 4.9.0-12-amd64
	DOCKER_VERSION: 19.03.2
	DOCKER_GRAPH_DRIVER: overlay2
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
	
	stderr:
	W0609 18:24:14.683047    1264 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-12-amd64\n", err: exit status 1
	error execution phase kubelet-start: a Node with name "multinode-20200609112134-5469-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
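The repeated join attempts above all fail for the reason spelled out in the kubeadm error: a Node object named "multinode-20200609112134-5469-m03" is still registered (and Ready) in the cluster, and running kubeadm reset -f on the worker alone does not remove that API object. A minimal recovery sketch, assuming kubectl is pointed at the multinode-20200609112134-5469 cluster and using placeholders for the token and CA hash shown in the log above:

	# delete the stale Node object from the control plane, then re-run the join on the worker
	kubectl delete node multinode-20200609112134-5469-m03
	sudo kubeadm reset -f
	sudo kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --node-name=multinode-20200609112134-5469-m03

As the error text itself notes, joining under a different --node-name would also avoid the conflict.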
multinode_test.go:156: I0609 11:23:06.458064   22822 mustload.go:64] Loading cluster: multinode-20200609112134-5469
I0609 11:23:06.458890   22822 cli_runner.go:108] Run: docker container inspect multinode-20200609112134-5469-m03 --format={{.State.Status}}
W0609 11:23:06.517973   22822 host.go:57] "multinode-20200609112134-5469-m03" host status: Stopped
I0609 11:23:07.341613   22822 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 in local docker daemon, skipping pull
I0609 11:23:07.341651   22822 cache.go:113] gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 exists in daemon, skipping pull
I0609 11:23:07.341662   22822 preload.go:95] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0609 11:23:07.341707   22822 preload.go:103] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4
I0609 11:23:07.341740   22822 cache.go:51] Caching tarball of preloaded images
I0609 11:23:07.341754   22822 preload.go:129] Found /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0609 11:23:07.341762   22822 cache.go:54] Finished verifying existence of preloaded tar for  v1.18.3 on docker
I0609 11:23:07.341882   22822 profile.go:156] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/multinode-20200609112134-5469/config.json ...
I0609 11:23:07.342133   22822 cache.go:178] Successfully downloaded all kic artifacts
I0609 11:23:07.342164   22822 start.go:240] acquiring machines lock for multinode-20200609112134-5469-m03: {Name:mk46463277e7c0d6daf1ecc8c78c462b84291a90 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0609 11:23:07.342387   22822 start.go:244] acquired machines lock for "multinode-20200609112134-5469-m03" in 196.804µs
I0609 11:23:07.342415   22822 start.go:88] Skipping create...Using existing machine configuration
I0609 11:23:07.342464   22822 fix.go:53] fixHost starting: m03
I0609 11:23:07.342827   22822 cli_runner.go:108] Run: docker container inspect multinode-20200609112134-5469-m03 --format={{.State.Status}}
I0609 11:23:07.399774   22822 fix.go:105] recreateIfNeeded on multinode-20200609112134-5469-m03: state=Stopped err=<nil>
W0609 11:23:07.399808   22822 fix.go:131] unexpected machine state, will restart: <nil>
I0609 11:23:07.404794   22822 cli_runner.go:108] Run: docker start multinode-20200609112134-5469-m03
I0609 11:23:07.852661   22822 cli_runner.go:108] Run: docker container inspect multinode-20200609112134-5469-m03 --format={{.State.Status}}
I0609 11:23:07.909825   22822 kic.go:318] container "multinode-20200609112134-5469-m03" state is running.
I0609 11:23:07.910394   22822 cli_runner.go:108] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20200609112134-5469-m03
I0609 11:23:07.974738   22822 profile.go:156] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/multinode-20200609112134-5469/config.json ...
I0609 11:23:07.975016   22822 machine.go:88] provisioning docker machine ...
I0609 11:23:07.975055   22822 ubuntu.go:166] provisioning hostname "multinode-20200609112134-5469-m03"
I0609 11:23:07.975116   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m03
I0609 11:23:08.038619   22822 main.go:115] libmachine: Using SSH client type: native
I0609 11:23:08.039016   22822 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bfa80] 0x7bfa50 <nil>  [] 0s} 127.0.0.1 32803 <nil> <nil>}
I0609 11:23:08.039055   22822 main.go:115] libmachine: About to run SSH command:
sudo hostname multinode-20200609112134-5469-m03 && echo "multinode-20200609112134-5469-m03" | sudo tee /etc/hostname
I0609 11:23:08.039859   22822 main.go:115] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42752->127.0.0.1:32803: read: connection reset by peer
I0609 11:23:11.187866   22822 main.go:115] libmachine: SSH cmd err, output: <nil>: multinode-20200609112134-5469-m03

I0609 11:23:11.187965   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m03
I0609 11:23:11.243471   22822 main.go:115] libmachine: Using SSH client type: native
I0609 11:23:11.243739   22822 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bfa80] 0x7bfa50 <nil>  [] 0s} 127.0.0.1 32803 <nil> <nil>}
I0609 11:23:11.243784   22822 main.go:115] libmachine: About to run SSH command:

		if ! grep -xq '.*\smultinode-20200609112134-5469-m03' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20200609112134-5469-m03/g' /etc/hosts;
			else 
				echo '127.0.1.1 multinode-20200609112134-5469-m03' | sudo tee -a /etc/hosts; 
			fi
		fi
I0609 11:23:11.367872   22822 main.go:115] libmachine: SSH cmd err, output: <nil>: 
I0609 11:23:11.367926   22822 ubuntu.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube}
I0609 11:23:11.367963   22822 ubuntu.go:174] setting up certificates
I0609 11:23:11.367976   22822 provision.go:82] configureAuth start
I0609 11:23:11.368083   22822 cli_runner.go:108] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20200609112134-5469-m03
I0609 11:23:11.423392   22822 provision.go:131] copyHostCerts
I0609 11:23:11.423475   22822 exec_runner.go:91] found /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cert.pem, removing ...
I0609 11:23:11.423544   22822 exec_runner.go:98] cp: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cert.pem (1078 bytes)
I0609 11:23:11.423669   22822 exec_runner.go:91] found /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/key.pem, removing ...
I0609 11:23:11.423706   22822 exec_runner.go:98] cp: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/key.pem (1679 bytes)
I0609 11:23:11.423805   22822 exec_runner.go:91] found /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/ca.pem, removing ...
I0609 11:23:11.423841   22822 exec_runner.go:98] cp: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/ca.pem (1038 bytes)
I0609 11:23:11.423910   22822 provision.go:105] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca-key.pem org=jenkins.multinode-20200609112134-5469-m03 san=[172.17.0.2 localhost 127.0.0.1]
I0609 11:23:11.749579   22822 provision.go:159] copyRemoteCerts
I0609 11:23:11.749657   22822 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0609 11:23:11.749713   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m03
I0609 11:23:11.804053   22822 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/multinode-20200609112134-5469-m03/id_rsa Username:docker}
I0609 11:23:11.892202   22822 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1038 bytes)
I0609 11:23:11.915277   22822 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/server.pem --> /etc/docker/server.pem (1155 bytes)
I0609 11:23:11.938640   22822 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0609 11:23:11.960814   22822 provision.go:85] duration metric: configureAuth took 592.818718ms
I0609 11:23:11.960841   22822 ubuntu.go:190] setting minikube options for container-runtime
I0609 11:23:11.961113   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m03
I0609 11:23:12.016576   22822 main.go:115] libmachine: Using SSH client type: native
I0609 11:23:12.016800   22822 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bfa80] 0x7bfa50 <nil>  [] 0s} 127.0.0.1 32803 <nil> <nil>}
I0609 11:23:12.016817   22822 main.go:115] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0609 11:23:12.139602   22822 main.go:115] libmachine: SSH cmd err, output: <nil>: overlay

I0609 11:23:12.139639   22822 ubuntu.go:71] root file system type: overlay
I0609 11:23:12.139834   22822 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0609 11:23:12.139917   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m03
I0609 11:23:12.193968   22822 main.go:115] libmachine: Using SSH client type: native
I0609 11:23:12.194189   22822 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bfa80] 0x7bfa50 <nil>  [] 0s} 127.0.0.1 32803 <nil> <nil>}
I0609 11:23:12.194303   22822 main.go:115] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0609 11:23:12.327844   22822 main.go:115] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0609 11:23:12.327941   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m03
I0609 11:23:12.383146   22822 main.go:115] libmachine: Using SSH client type: native
I0609 11:23:12.383374   22822 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bfa80] 0x7bfa50 <nil>  [] 0s} 127.0.0.1 32803 <nil> <nil>}
I0609 11:23:12.383404   22822 main.go:115] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0609 11:23:12.516233   22822 main.go:115] libmachine: SSH cmd err, output: <nil>: 
I0609 11:23:12.516269   22822 machine.go:91] provisioned docker machine in 4.541226375s
I0609 11:23:12.516283   22822 start.go:203] post-start starting for "multinode-20200609112134-5469-m03" (driver="docker")
I0609 11:23:12.516293   22822 start.go:213] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0609 11:23:12.516366   22822 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0609 11:23:12.516429   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m03
I0609 11:23:12.569916   22822 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/multinode-20200609112134-5469-m03/id_rsa Username:docker}
I0609 11:23:12.660716   22822 ssh_runner.go:148] Run: cat /etc/os-release
I0609 11:23:12.664748   22822 main.go:115] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0609 11:23:12.664782   22822 main.go:115] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0609 11:23:12.664793   22822 main.go:115] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0609 11:23:12.664799   22822 info.go:96] Remote host: Ubuntu 19.10
I0609 11:23:12.664812   22822 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/addons for local assets ...
I0609 11:23:12.664868   22822 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/files for local assets ...
I0609 11:23:12.665012   22822 filesync.go:141] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/files/etc/test/nested/copy/5469/hosts -> hosts in /etc/test/nested/copy/5469
I0609 11:23:12.665059   22822 ssh_runner.go:148] Run: sudo mkdir -p /etc/test/nested/copy/5469
I0609 11:23:12.674626   22822 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/files/etc/test/nested/copy/5469/hosts --> /etc/test/nested/copy/5469/hosts (40 bytes)
I0609 11:23:12.699199   22822 start.go:206] post-start completed in 182.896432ms
I0609 11:23:12.699233   22822 fix.go:55] fixHost completed within 5.356770312s
I0609 11:23:12.699242   22822 start.go:75] releasing machines lock for "multinode-20200609112134-5469-m03", held for 5.356837452s
I0609 11:23:12.699346   22822 cli_runner.go:108] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20200609112134-5469-m03
I0609 11:23:12.753742   22822 ssh_runner.go:148] Run: systemctl --version
I0609 11:23:12.753806   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m03
I0609 11:23:12.753898   22822 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0609 11:23:12.754098   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m03
I0609 11:23:12.811830   22822 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/multinode-20200609112134-5469-m03/id_rsa Username:docker}
I0609 11:23:12.812910   22822 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/multinode-20200609112134-5469-m03/id_rsa Username:docker}
I0609 11:23:12.916362   22822 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0609 11:23:12.931246   22822 cruntime.go:189] skipping containerd shutdown because we are bound to it
I0609 11:23:12.931324   22822 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0609 11:23:12.945355   22822 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0609 11:23:13.012495   22822 ssh_runner.go:148] Run: sudo systemctl start docker
I0609 11:23:13.025258   22822 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
I0609 11:23:13.096026   22822 cli_runner.go:108] Run: docker network ls --filter name=bridge --format {{.ID}}
I0609 11:23:13.149310   22822 cli_runner.go:108] Run: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" 1fddf8d61680
I0609 11:23:13.202411   22822 network.go:77] got host ip for mount in container by inspect docker network: 172.17.0.1
I0609 11:23:13.202495   22822 start.go:268] checking
I0609 11:23:13.202555   22822 ssh_runner.go:148] Run: grep 172.17.0.1	host.minikube.internal$ /etc/hosts
I0609 11:23:13.207207   22822 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "172.17.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0609 11:23:13.219600   22822 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/profiles/multinode-20200609112134-5469 for IP: 172.17.0.2
I0609 11:23:13.219668   22822 certs.go:169] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/ca.key
I0609 11:23:13.219687   22822 certs.go:169] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/proxy-client-ca.key
I0609 11:23:13.219775   22822 certs.go:348] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/5469.pem (1338 bytes)
W0609 11:23:13.219826   22822 certs.go:344] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/5469_empty.pem, impossibly tiny 0 bytes
I0609 11:23:13.219840   22822 certs.go:348] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca-key.pem (1679 bytes)
I0609 11:23:13.219878   22822 certs.go:348] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/ca.pem (1038 bytes)
I0609 11:23:13.219936   22822 certs.go:348] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/cert.pem (1078 bytes)
I0609 11:23:13.219962   22822 certs.go:348] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/key.pem (1679 bytes)
I0609 11:23:13.220987   22822 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0609 11:23:13.244147   22822 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0609 11:23:13.266882   22822 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0609 11:23:13.291919   22822 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0609 11:23:13.320280   22822 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/certs/5469.pem --> /usr/share/ca-certificates/5469.pem (1338 bytes)
I0609 11:23:13.344294   22822 ssh_runner.go:215] scp /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0609 11:23:13.368778   22822 ssh_runner.go:148] Run: openssl version
I0609 11:23:13.375626   22822 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0609 11:23:13.385688   22822 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0609 11:23:13.390164   22822 certs.go:389] hashing: -rw-r--r-- 1 root root 1066 Jun  9 17:37 /usr/share/ca-certificates/minikubeCA.pem
I0609 11:23:13.390224   22822 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0609 11:23:13.396745   22822 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0609 11:23:13.405593   22822 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5469.pem && ln -fs /usr/share/ca-certificates/5469.pem /etc/ssl/certs/5469.pem"
I0609 11:23:13.415302   22822 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/5469.pem
I0609 11:23:13.419672   22822 certs.go:389] hashing: -rw-r--r-- 1 root root 1338 Jun  9 18:19 /usr/share/ca-certificates/5469.pem
I0609 11:23:13.419753   22822 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5469.pem
I0609 11:23:13.426854   22822 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5469.pem /etc/ssl/certs/51391683.0"
I0609 11:23:13.436949   22822 kubeadm.go:124] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.18.3 EtcdDataDir:/var/lib/minikube/etcd ClusterName:multinode-20200609112134-5469 NodeName:multinode-20200609112134-5469-m03 DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.4"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.2 ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0609 11:23:13.437115   22822 kubeadm.go:128] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "multinode-20200609112134-5469-m03"
  kubeletExtraArgs:
    node-ip: 172.17.0.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.17.0.4"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.18.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 172.17.0.2:10249
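The block above is the kubeadm config minikube renders for the joining worker. If the cluster's effective values need to be compared against it, the hint kubeadm prints in the preflight output applies; for example (a sketch, assuming the profile's kubectl context carries the profile name):

	kubectl --context multinode-20200609112134-5469 -n kube-system get cm kubeadm-config -o yaml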

I0609 11:23:13.437187   22822 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I0609 11:23:13.437166   22822 cache.go:92] acquiring lock: {Name:mkf4b3448425d401ea0fe30a83ad99c11c351925 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0609 11:23:13.437193   22822 cache.go:92] acquiring lock: {Name:mke98536499a4f6720a0176e6f6570186bd16443 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0609 11:23:13.437304   22822 cache.go:100] /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/images/busybox_latest exists
I0609 11:23:13.437328   22822 cache.go:100] /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/images/k8s.gcr.io/pause_latest exists
I0609 11:23:13.437330   22822 cache.go:81] cache image "busybox:latest" -> "/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/images/busybox_latest" took 153.925µs
I0609 11:23:13.437350   22822 cache.go:66] save to tar file busybox:latest -> /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/images/busybox_latest succeeded
I0609 11:23:13.437359   22822 cache.go:81] cache image "k8s.gcr.io/pause:latest" -> "/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/images/k8s.gcr.io/pause_latest" took 215.957µs
I0609 11:23:13.437373   22822 cache.go:66] save to tar file k8s.gcr.io/pause:latest -> /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/cache/images/k8s.gcr.io/pause_latest succeeded
I0609 11:23:13.437393   22822 cache.go:73] Successfully saved all images to host disk.
I0609 11:23:13.437569   22822 cli_runner.go:108] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io --format {{.Names}}
I0609 11:23:13.497439   22822 cli_runner.go:108] Run: docker container inspect functional-20200609111957-5469 --format={{.State.Status}}
I0609 11:23:13.511365   22822 kubeadm.go:755] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=multinode-20200609112134-5469-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2 --pod-manifest-path=/etc/kubernetes/manifests

[Install]
config:
{KubernetesVersion:v1.18.3 ClusterName:multinode-20200609112134-5469 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:}
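The kubelet drop-in rendered above clears the inherited ExecStart and replaces it with the node-specific flags (--node-ip, --hostname-override). To confirm what systemd actually loaded on the node, the same pattern the log uses for docker.service works here too (a sketch, run from a shell inside the node container):

	sudo systemctl cat kubelet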
I0609 11:23:13.511457   22822 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.18.3
I0609 11:23:13.521468   22822 binaries.go:43] Found k8s binaries, skipping transfer
I0609 11:23:13.521572   22822 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0609 11:23:13.531580   22822 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (557 bytes)
I0609 11:23:13.557060   22822 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0609 11:23:13.557922   22822 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0609 11:23:13.558006   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20200609111957-5469
I0609 11:23:13.580279   22822 start.go:268] checking
I0609 11:23:13.580346   22822 ssh_runner.go:148] Run: grep 172.17.0.4	control-plane.minikube.internal$ /etc/hosts
I0609 11:23:13.584510   22822 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0609 11:23:13.596389   22822 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0609 11:23:13.623323   22822 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/functional-20200609111957-5469/id_rsa Username:docker}
I0609 11:23:13.659940   22822 ssh_runner.go:148] Run: sudo systemctl start kubelet
I0609 11:23:13.672797   22822 host.go:65] Checking if "multinode-20200609112134-5469" exists ...
I0609 11:23:13.673102   22822 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm token create --print-join-command --ttl=0"
I0609 11:23:13.673170   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469
I0609 11:23:13.729296   22822 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32791 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/multinode-20200609112134-5469/id_rsa Username:docker}
I0609 11:23:13.770116   22822 docker.go:379] Got preloaded images: -- stdout --
busybox:latest
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
k8s.gcr.io/kube-apiserver:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
kubernetesui/dashboard:v2.0.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
busybox:1.28.4-glibc
gcr.io/k8s-minikube/storage-provisioner:v1.8.1
k8s.gcr.io/pause:latest

-- /stdout --
I0609 11:23:13.770148   22822 cache_images.go:69] Images are preloaded, skipping loading
I0609 11:23:13.770627   22822 cli_runner.go:108] Run: docker container inspect multinode-20200609112134-5469 --format={{.State.Status}}
I0609 11:23:13.823865   22822 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0609 11:23:13.823913   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469
I0609 11:23:13.881544   22822 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32791 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/multinode-20200609112134-5469/id_rsa Username:docker}
I0609 11:23:13.889888   22822 kubeadm.go:602] JoinCluster: {Name:multinode-20200609112134-5469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:multinode-20200609112134-5469 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true} {Name:m02 IP:172.17.0.5 Port:0 KubernetesVersion:v1.18.3 ControlPlane:false Worker:true} {Name:m03 IP:172.17.0.2 Port:0 KubernetesVersion:v1.18.3 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true system_pods:true]}
I0609 11:23:13.890017   22822 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset -f"
I0609 11:23:14.152639   22822 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xlkgi8.irz6duup9o665b2e     --discovery-token-ca-cert-hash sha256:552ac9d9cc73cd6b819df4880d824c03a84f81028b36aa8de5bf057c34a0ed51 --ignore-preflight-errors=all --node-name=multinode-20200609112134-5469-m03"
I0609 11:23:14.152767   22822 docker.go:379] Got preloaded images: -- stdout --
busybox:latest
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
k8s.gcr.io/kube-apiserver:v1.18.3
kubernetesui/dashboard:v2.0.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
kindest/kindnetd:0.5.4
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1
k8s.gcr.io/pause:latest

-- /stdout --
I0609 11:23:14.152792   22822 cache_images.go:69] Images are preloaded, skipping loading
I0609 11:23:14.153207   22822 cli_runner.go:108] Run: docker container inspect multinode-20200609112134-5469-m02 --format={{.State.Status}}
I0609 11:23:14.217217   22822 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0609 11:23:14.217268   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m02
I0609 11:23:14.277312   22822 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32795 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/multinode-20200609112134-5469-m02/id_rsa Username:docker}
I0609 11:23:14.578720   22822 docker.go:379] Got preloaded images: -- stdout --
busybox:latest
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
k8s.gcr.io/kube-apiserver:v1.18.3
kubernetesui/dashboard:v2.0.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
kindest/kindnetd:0.5.4
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1
k8s.gcr.io/pause:latest

-- /stdout --
I0609 11:23:14.578750   22822 cache_images.go:69] Images are preloaded, skipping loading
I0609 11:23:14.579185   22822 cli_runner.go:108] Run: docker container inspect multinode-20200609112134-5469-m03 --format={{.State.Status}}
I0609 11:23:14.633931   22822 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0609 11:23:14.634012   22822 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m03
I0609 11:23:14.689238   22822 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/multinode-20200609112134-5469-m03/id_rsa Username:docker}
I0609 11:23:14.834846   22822 docker.go:379] Got preloaded images: -- stdout --
busybox:latest
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-apiserver:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
kubernetesui/dashboard:v2.0.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1
k8s.gcr.io/pause:latest

-- /stdout --
I0609 11:23:14.834873   22822 cache_images.go:69] Images are preloaded, skipping loading
I0609 11:23:14.834885   22822 cache_images.go:225] succeeded pushing to: functional-20200609111957-5469 multinode-20200609112134-5469 multinode-20200609112134-5469-m02 multinode-20200609112134-5469-m03
I0609 11:23:14.834895   22822 cache_images.go:226] failed pushing to: 
I0609 11:23:25.625553   22822 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset -f"
I0609 11:23:25.752796   22822 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xlkgi8.irz6duup9o665b2e     --discovery-token-ca-cert-hash sha256:552ac9d9cc73cd6b819df4880d824c03a84f81028b36aa8de5bf057c34a0ed51 --ignore-preflight-errors=all --node-name=multinode-20200609112134-5469-m03"
I0609 11:23:47.776490   22822 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset -f"
I0609 11:23:47.897857   22822 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xlkgi8.irz6duup9o665b2e     --discovery-token-ca-cert-hash sha256:552ac9d9cc73cd6b819df4880d824c03a84f81028b36aa8de5bf057c34a0ed51 --ignore-preflight-errors=all --node-name=multinode-20200609112134-5469-m03"
I0609 11:24:14.514030   22822 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset -f"
I0609 11:24:14.630732   22822 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xlkgi8.irz6duup9o665b2e     --discovery-token-ca-cert-hash sha256:552ac9d9cc73cd6b819df4880d824c03a84f81028b36aa8de5bf057c34a0ed51 --ignore-preflight-errors=all --node-name=multinode-20200609112134-5469-m03"
I0609 11:24:15.010241   22822 kubeadm.go:604] JoinCluster complete in 1m1.120361642s
I0609 11:24:15.010333   22822 exit.go:58] WithError(failed to start node)=startup failed: joining cluster: joining cp: cmd failed: sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xlkgi8.irz6duup9o665b2e     --discovery-token-ca-cert-hash sha256:552ac9d9cc73cd6b819df4880d824c03a84f81028b36aa8de5bf057c34a0ed51 --ignore-preflight-errors=all --node-name=multinode-20200609112134-5469-m03
-- stdout --
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.9.0-12-amd64
DOCKER_VERSION: 19.03.2
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

-- /stdout --
** stderr ** 
W0609 18:24:14.683047    1264 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-12-amd64\n", err: exit status 1
error execution phase kubelet-start: a Node with name "multinode-20200609112134-5469-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher

** /stderr **
: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xlkgi8.irz6duup9o665b2e     --discovery-token-ca-cert-hash sha256:552ac9d9cc73cd6b819df4880d824c03a84f81028b36aa8de5bf057c34a0ed51 --ignore-preflight-errors=all --node-name=multinode-20200609112134-5469-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.9.0-12-amd64
DOCKER_VERSION: 19.03.2
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

stderr:
W0609 18:24:14.683047    1264 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-12-amd64\n", err: exit status 1
error execution phase kubelet-start: a Node with name "multinode-20200609112134-5469-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
called from:
goroutine 1 [running]:
runtime/debug.Stack(0xc0007bce00, 0x0, 0x0)
	/usr/local/go/src/runtime/debug/stack.go:24 +0x9d
k8s.io/minikube/pkg/minikube/exit.WithError(0x1ba2834, 0x14, 0x1e93220, 0xc00077f9e0)
	/app/pkg/minikube/exit/exit.go:58 +0x34
k8s.io/minikube/cmd/minikube/cmd.glob..func17(0x2c6d780, 0xc00073e600, 0x1, 0x4)
	/app/cmd/minikube/cmd/node_start.go:73 +0x5c5
github.com/spf13/cobra.(*Command).execute(0x2c6d780, 0xc00073e5c0, 0x4, 0x4, 0x2c6d780, 0xc00073e5c0)
	/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:846 +0x2aa
github.com/spf13/cobra.(*Command).ExecuteC(0x2c6e4a0, 0x0, 0x1, 0xc000048400)
	/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
	/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute()
	/app/cmd/minikube/cmd/root.go:112 +0x747
main.main()
	/app/cmd/minikube/main.go:71 +0x143
W0609 11:24:15.012008   22822 out.go:201] failed to start node: startup failed: joining cluster: joining cp: cmd failed: sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xlkgi8.irz6duup9o665b2e     --discovery-token-ca-cert-hash sha256:552ac9d9cc73cd6b819df4880d824c03a84f81028b36aa8de5bf057c34a0ed51 --ignore-preflight-errors=all --node-name=multinode-20200609112134-5469-m03
-- stdout --
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.9.0-12-amd64
DOCKER_VERSION: 19.03.2
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

-- /stdout --
** stderr ** 
W0609 18:24:14.683047    1264 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-12-amd64\n", err: exit status 1
error execution phase kubelet-start: a Node with name "multinode-20200609112134-5469-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher

** /stderr **
: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xlkgi8.irz6duup9o665b2e     --discovery-token-ca-cert-hash sha256:552ac9d9cc73cd6b819df4880d824c03a84f81028b36aa8de5bf057c34a0ed51 --ignore-preflight-errors=all --node-name=multinode-20200609112134-5469-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.9.0-12-amd64
DOCKER_VERSION: 19.03.2
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

stderr:
W0609 18:24:14.683047    1264 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-12-amd64\n", err: exit status 1
error execution phase kubelet-start: a Node with name "multinode-20200609112134-5469-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
* 
X failed to start node: startup failed: joining cluster: joining cp: cmd failed: sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xlkgi8.irz6duup9o665b2e     --discovery-token-ca-cert-hash sha256:552ac9d9cc73cd6b819df4880d824c03a84f81028b36aa8de5bf057c34a0ed51 --ignore-preflight-errors=all --node-name=multinode-20200609112134-5469-m03
-- stdout --
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.9.0-12-amd64
DOCKER_VERSION: 19.03.2
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

-- /stdout --
** stderr ** 
W0609 18:24:14.683047    1264 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-12-amd64\n", err: exit status 1
error execution phase kubelet-start: a Node with name "multinode-20200609112134-5469-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher

** /stderr **
: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm join control-plane.minikube.internal:8443 --token xlkgi8.irz6duup9o665b2e     --discovery-token-ca-cert-hash sha256:552ac9d9cc73cd6b819df4880d824c03a84f81028b36aa8de5bf057c34a0ed51 --ignore-preflight-errors=all --node-name=multinode-20200609112134-5469-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.9.0-12-amd64
DOCKER_VERSION: 19.03.2
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

stderr:
W0609 18:24:14.683047    1264 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.9.0-12-amd64\n", err: exit status 1
error execution phase kubelet-start: a Node with name "multinode-20200609112134-5469-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher

* 
* minikube is exiting due to an error. If the above message is not useful, open an issue:
- https://github.com/kubernetes/minikube/issues/new/choose
multinode_test.go:157: node start returned an error. args "out/minikube-linux-amd64 -p multinode-20200609112134-5469 node start m03 --alsologtostderr": exit status 70
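The exit status 70 above traces back to the kubeadm message repeated in the output: a Node named "multinode-20200609112134-5469-m03" with status "Ready" already exists in the cluster, so every retried join is rejected at the kubelet-start phase. A minimal manual recovery along the lines that message suggests is sketched below; the node name, binaries path, token and discovery hash are taken verbatim from the log above, and kubectl is assumed to be pointed at this cluster's kubeconfig.

  # Remove the stale Node object from the control plane first.
  kubectl delete node multinode-20200609112134-5469-m03
  # Then, on the m03 container, wipe the previous kubeadm state and re-join under the same name.
  sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset -f
  sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm join control-plane.minikube.internal:8443 \
    --token xlkgi8.irz6duup9o665b2e \
    --discovery-token-ca-cert-hash sha256:552ac9d9cc73cd6b819df4880d824c03a84f81028b36aa8de5bf057c34a0ed51 \
    --ignore-preflight-errors=all --node-name=multinode-20200609112134-5469-m03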
multinode_test.go:161: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20200609112134-5469 status
multinode_test.go:161: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20200609112134-5469 status: exit status 2 (894.754077ms)

-- stdout --
	multinode-20200609112134-5469
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20200609112134-5469-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20200609112134-5469-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:163: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-20200609112134-5469 status" : exit status 2
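The [WARNING IsDockerSystemdCheck] lines in the join output are preflight noise rather than the cause of this failure: the node runs Docker with the cgroupfs driver (matching the kubelet's --cgroup-driver=cgroupfs flag above), while the guide linked in the warning recommends systemd. Silencing the warning would look roughly like the sketch below, assuming /etc/docker/daemon.json inside the node container is writable and a Docker restart there is acceptable; the kubelet's --cgroup-driver flag would then have to be changed to match.

  # Switch Docker to the systemd cgroup driver, per the guide referenced in the warning.
  cat <<'EOF' | sudo tee /etc/docker/daemon.json
  {
    "exec-opts": ["native.cgroupdriver=systemd"]
  }
  EOF
  sudo systemctl restart docker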
helpers.go:214: -----------------------post-mortem--------------------------------
helpers.go:222: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers.go:223: (dbg) Run:  docker inspect multinode-20200609112134-5469
helpers.go:227: (dbg) docker inspect multinode-20200609112134-5469:

-- stdout --
	[
	    {
	        "Id": "4fa316c228636c38751b12f429c3e0ba46a438854d56e4f9a0da336d65914ffa",
	        "Created": "2020-06-09T18:21:35.901912908Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 12226,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-06-09T18:21:36.489162081Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e6bc41c39dc48b2b472936db36aedb28527ce0f675ed1bc20d029125c9ccf578",
	        "ResolvConfPath": "/var/lib/docker/containers/4fa316c228636c38751b12f429c3e0ba46a438854d56e4f9a0da336d65914ffa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4fa316c228636c38751b12f429c3e0ba46a438854d56e4f9a0da336d65914ffa/hostname",
	        "HostsPath": "/var/lib/docker/containers/4fa316c228636c38751b12f429c3e0ba46a438854d56e4f9a0da336d65914ffa/hosts",
	        "LogPath": "/var/lib/docker/containers/4fa316c228636c38751b12f429c3e0ba46a438854d56e4f9a0da336d65914ffa/4fa316c228636c38751b12f429c3e0ba46a438854d56e4f9a0da336d65914ffa-json.log",
	        "Name": "/multinode-20200609112134-5469",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-20200609112134-5469:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a80949e3a62d15663d922c82fb78d9a89c2e63044757613442b03f4e61abfcbf-init/diff:/var/lib/docker/overlay2/842cfb80f5123bafae2466fc7efa639aa41e065f3255b19f9debf027ea5ee70f/diff:/var/lib/docker/overlay2/52955c8ec40656be74515789d00b745e87d9b7fef6138e7b17a5363a06dbcfa5/diff:/var/lib/docker/overlay2/03cddd8e08a064f361b14f4944cfb79c7f8046479d95520269069705f7ab0528/diff:/var/lib/docker/overlay2/c64285a2182b3e7c4c0b57464030adbef4778934f113881df08564634b1f6221/diff:/var/lib/docker/overlay2/90f13b458ed1b350c6216e1ace4dd61d3d2d9dfee23ffc01aa7c9bb98bd421f6/diff:/var/lib/docker/overlay2/fe1683c816f3c3398f9921579d07f6c594583c7c0e5afad822f05cb5888c1268/diff:/var/lib/docker/overlay2/10612719aad9c166640f8cee6edd67101fe099610e2f6c88fcb61b31af35fd9d/diff:/var/lib/docker/overlay2/7c4cc5926eeaa0fefbc7d4a40004d880251629462c856500bafda9daac74d118/diff:/var/lib/docker/overlay2/9aa9a9f3601aea1f46ee059e5089e93043b90fd2fd30e3cd2d15f9183becf2a5/diff:/var/lib/docker/overlay
2/5b620b7b826525fd3203105b70fc1df648dcf00d91b123f32977d15a9aa24d42/diff:/var/lib/docker/overlay2/430918b4b183807894e9422553842dab55b537cc61905b96da054e1bd70225c3/diff:/var/lib/docker/overlay2/487a49458a3b877836066ca9e28d566b97e11dcaeaaa3b2645fb4c57d9e4322f/diff:/var/lib/docker/overlay2/02a4aa873547c0f7358529bad7f6983f4ae79dda4704251d86f5cffd924ecc22/diff:/var/lib/docker/overlay2/57242607bb68a1205e6073d4d78984d3a8ca810645de93f0578d911ff171e91f/diff:/var/lib/docker/overlay2/f7b86afeb24318436caa8fb2ecc416589f3e02ddec1addf6f367987b50ec4671/diff:/var/lib/docker/overlay2/f18bbd9e4f03562d739288185addb9e977807f3f93d0637976cc612e9e703752/diff:/var/lib/docker/overlay2/4a3511ac2d9c89e7a38909f5646b9a5983e5fbd4b20269aa0a438365ac9d960a/diff:/var/lib/docker/overlay2/3a357f9db4e41d2c676e3426a10c5404f0d121c954ac8cae7b1d34babb42323e/diff:/var/lib/docker/overlay2/422f1db82f9e94b7c185a899dfd8d725528b6ffa7b344759697faeae9246dd79/diff:/var/lib/docker/overlay2/135303c7fde9f4ebf5c3b0dfd5d9bc4a70c2bd3d259345543f4b85328bf5afab/diff:/v
ar/lib/docker/overlay2/54798ffee37e6b1949e5e9cb69ea12f7d2fceb53b37445ea1739701a82bae4f3/diff:/var/lib/docker/overlay2/f0432ec26d1b881669832c1d9e9179a47fd26f19eb4ddfba1232f2c00b978c33/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a80949e3a62d15663d922c82fb78d9a89c2e63044757613442b03f4e61abfcbf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a80949e3a62d15663d922c82fb78d9a89c2e63044757613442b03f4e61abfcbf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a80949e3a62d15663d922c82fb78d9a89c2e63044757613442b03f4e61abfcbf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-20200609112134-5469",
	                "Source": "/var/lib/docker/volumes/multinode-20200609112134-5469/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-20200609112134-5469",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-20200609112134-5469",
	                "name.minikube.sigs.k8s.io": "multinode-20200609112134-5469",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "afd0eeae5300b0cab6985c363197e5c635ef8eb6557fb3387e9f9fbf4d93d5e6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/afd0eeae5300",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "f7d94d4d3c9f8135756389e3523ddb9c0b935424a94d83290fa009317eee577e",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.4",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:04",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "1fddf8d61680b60b987eb147ce51d80fbf33310bf69844ebbd2f62729313f1ae",
	                    "EndpointID": "f7d94d4d3c9f8135756389e3523ddb9c0b935424a94d83290fa009317eee577e",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.4",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:04",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers.go:231: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20200609112134-5469 -n multinode-20200609112134-5469
helpers.go:236: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers.go:237: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers.go:239: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20200609112134-5469 logs -n 25
helpers.go:239: (dbg) Done: out/minikube-linux-amd64 -p multinode-20200609112134-5469 logs -n 25: (1.799449973s)
helpers.go:244: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Tue 2020-06-09 18:21:36 UTC, end at Tue 2020-06-09 18:24:17 UTC. --
	* Jun 09 18:21:42 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:21:42.295992255Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0007977f0, READY" module=grpc
	* Jun 09 18:21:42 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:21:42.296894109Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	* Jun 09 18:21:42 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:21:42.296918737Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	* Jun 09 18:21:42 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:21:42.296936800Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
	* Jun 09 18:21:42 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:21:42.296952723Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Jun 09 18:21:42 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:21:42.297035479Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000807100, CONNECTING" module=grpc
	* Jun 09 18:21:42 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:21:42.297050759Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
	* Jun 09 18:21:42 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:21:42.297442528Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000807100, READY" module=grpc
	* Jun 09 18:21:42 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:21:42.300652497Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	* Jun 09 18:21:42 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:21:42.312684365Z" level=warning msg="Your kernel does not support swap memory limit"
	* Jun 09 18:21:42 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:21:42.312715030Z" level=warning msg="Your kernel does not support cgroup rt period"
	* Jun 09 18:21:42 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:21:42.312722756Z" level=warning msg="Your kernel does not support cgroup rt runtime"
	* Jun 09 18:21:42 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:21:42.312896071Z" level=info msg="Loading containers: start."
	* Jun 09 18:21:42 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:21:42.425096167Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Jun 09 18:21:42 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:21:42.472954800Z" level=info msg="Loading containers: done."
	* Jun 09 18:21:42 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:21:42.507328223Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
	* Jun 09 18:21:42 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:21:42.507414555Z" level=info msg="Daemon has completed initialization"
	* Jun 09 18:21:42 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:21:42.524635452Z" level=info msg="API listen on [::]:2376"
	* Jun 09 18:21:42 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:21:42.524646827Z" level=info msg="API listen on /var/run/docker.sock"
	* Jun 09 18:21:42 multinode-20200609112134-5469 systemd[1]: Started Docker Application Container Engine.
	* Jun 09 18:22:22 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:22:22.042809729Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* Jun 09 18:22:22 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:22:22.047418654Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* Jun 09 18:22:28 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:22:28.182990620Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* Jun 09 18:23:44 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:23:44.807558288Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Jun 09 18:23:44 multinode-20200609112134-5469 dockerd[348]: time="2020-06-09T18:23:44.807720968Z" level=warning msg="a8dd2a55e2a09775ce08ed542ccbe4aa1ab24c4d4e6b8c035047440e82a2ad6e cleanup: failed to unmount IPC: umount /var/lib/docker/containers/a8dd2a55e2a09775ce08ed542ccbe4aa1ab24c4d4e6b8c035047440e82a2ad6e/mounts/shm, flags: 0x2: no such file or directory"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID
	* ee8718cef3dff       da26705ccb4b5                                                                              30 seconds ago       Running             kube-controller-manager   1                   5d1f9410e3296
	* 9caf8a846d393       kindest/kindnetd@sha256:b33085aafb18b652ce4b3b8c41dbf172dac8b62ffe016d26863f88e7f6bf1c98   About a minute ago   Running             kindnet-cni               0                   7cdbba2b9ecc2
	* a5f49f42dcd65       4689081edb103                                                                              About a minute ago   Running             storage-provisioner       0                   b3e9bc3bd9d48
	* 1c713dd9e092b       67da37a9a360e                                                                              About a minute ago   Running             coredns                   0                   51baf5ede9f46
	* a361ad6db7959       67da37a9a360e                                                                              About a minute ago   Running             coredns                   0                   7bf176c59f007
	* 6be55225fd448       3439b7546f29b                                                                              About a minute ago   Running             kube-proxy                0                   6600cadc861db
	* fe5c5564b6a7d       76216c34ed0c7                                                                              2 minutes ago        Running             kube-scheduler            0                   fd3d278b9d33d
	* 1637d1f68225a       303ce5db0e90d                                                                              2 minutes ago        Running             etcd                      0                   3dbf7f4a2d1e7
	* d4060bb4cff47       7e28efa976bd1                                                                              2 minutes ago        Running             kube-apiserver            0                   6a34e99b3a29f
	* a8dd2a55e2a09       da26705ccb4b5                                                                              2 minutes ago        Exited              kube-controller-manager   0                   5d1f9410e3296
	* 
	* ==> coredns [1c713dd9e092] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
	* CoreDNS-1.6.7
	* linux/amd64, go1.13.6, da7f65b
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* I0609 18:22:52.942941       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-06-09 18:22:22.942109663 +0000 UTC m=+0.097909089) (total time: 30.000701863s):
	* Trace[2019727887]: [30.000701863s] [30.000701863s] END
	* E0609 18:22:52.943253       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* I0609 18:22:52.943340       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-06-09 18:22:22.942833181 +0000 UTC m=+0.098632573) (total time: 30.000438781s):
	* Trace[1427131847]: [30.000438781s] [30.000438781s] END
	* E0609 18:22:52.943366       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* I0609 18:22:52.943703       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-06-09 18:22:22.943128606 +0000 UTC m=+0.098928006) (total time: 30.000551532s):
	* Trace[939984059]: [30.000551532s] [30.000551532s] END
	* E0609 18:22:52.943760       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* 
	* ==> coredns [a361ad6db795] <==
	* I0609 18:22:52.856067       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-06-09 18:22:22.855291768 +0000 UTC m=+0.093347899) (total time: 30.000590736s):
	* Trace[2019727887]: [30.000590736s] [30.000590736s] END
	* E0609 18:22:52.856117       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* I0609 18:22:52.856176       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-06-09 18:22:22.855366805 +0000 UTC m=+0.093422959) (total time: 30.000560175s):
	* Trace[1427131847]: [30.000560175s] [30.000560175s] END
	* E0609 18:22:52.856193       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* I0609 18:22:52.856434       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-06-09 18:22:22.855355551 +0000 UTC m=+0.093411680) (total time: 30.001054642s):
	* Trace[939984059]: [30.001054642s] [30.001054642s] END
	* E0609 18:22:52.856453       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
	* CoreDNS-1.6.7
	* linux/amd64, go1.13.6, da7f65b
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* 
	* ==> describe nodes <==
	* Name:               multinode-20200609112134-5469
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=multinode-20200609112134-5469
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=b72d7683536818416863536d77e7e628181d7fce
	*                     minikube.k8s.io/name=multinode-20200609112134-5469
	*                     minikube.k8s.io/updated_at=2020_06_09T11_21_59_0700
	*                     minikube.k8s.io/version=v1.11.0
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Tue, 09 Jun 2020 18:21:55 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  multinode-20200609112134-5469
	*   AcquireTime:     <unset>
	*   RenewTime:       Tue, 09 Jun 2020 18:24:11 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Tue, 09 Jun 2020 18:22:30 +0000   Tue, 09 Jun 2020 18:21:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Tue, 09 Jun 2020 18:22:30 +0000   Tue, 09 Jun 2020 18:21:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Tue, 09 Jun 2020 18:22:30 +0000   Tue, 09 Jun 2020 18:21:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Tue, 09 Jun 2020 18:22:30 +0000   Tue, 09 Jun 2020 18:22:09 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  172.17.0.4
	*   Hostname:    multinode-20200609112134-5469
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887012Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887012Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 347eed688d0d461d9e2661345f13441f
	*   System UUID:                94b56d30-a98d-4485-869b-9d805fe1b047
	*   Boot ID:                    64f3ac6d-30f2-41fc-bc23-3cf0dad66462
	*   Kernel Version:             4.9.0-12-amd64
	*   OS Image:                   Ubuntu 19.10
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://19.3.2
	*   Kubelet Version:            v1.18.3
	*   Kube-Proxy Version:         v1.18.3
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (9 in total)
	*   Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	*   kube-system                 coredns-66bff467f8-2ptph                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     117s
	*   kube-system                 coredns-66bff467f8-8lp4d                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     117s
	*   kube-system                 etcd-multinode-20200609112134-5469                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	*   kube-system                 kindnet-jq8cp                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      116s
	*   kube-system                 kube-apiserver-multinode-20200609112134-5469             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	*   kube-system                 kube-controller-manager-multinode-20200609112134-5469    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	*   kube-system                 kube-proxy-wcwvr                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	*   kube-system                 kube-scheduler-multinode-20200609112134-5469             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	*   kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                850m (10%)  100m (1%)
	*   memory             190Mi (0%)  390Mi (1%)
	*   ephemeral-storage  0 (0%)      0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age                    From                                       Message
	*   ----    ------                   ----                   ----                                       -------
	*   Normal  NodeHasSufficientMemory  2m28s (x5 over 2m29s)  kubelet, multinode-20200609112134-5469     Node multinode-20200609112134-5469 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    2m28s (x4 over 2m29s)  kubelet, multinode-20200609112134-5469     Node multinode-20200609112134-5469 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     2m28s (x4 over 2m29s)  kubelet, multinode-20200609112134-5469     Node multinode-20200609112134-5469 status is now: NodeHasSufficientPID
	*   Normal  Starting                 2m18s                  kubelet, multinode-20200609112134-5469     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  2m18s                  kubelet, multinode-20200609112134-5469     Node multinode-20200609112134-5469 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    2m18s                  kubelet, multinode-20200609112134-5469     Node multinode-20200609112134-5469 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     2m18s                  kubelet, multinode-20200609112134-5469     Node multinode-20200609112134-5469 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             2m18s                  kubelet, multinode-20200609112134-5469     Node multinode-20200609112134-5469 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  2m17s                  kubelet, multinode-20200609112134-5469     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                2m8s                   kubelet, multinode-20200609112134-5469     Node multinode-20200609112134-5469 status is now: NodeReady
	*   Normal  Starting                 115s                   kube-proxy, multinode-20200609112134-5469  Starting kube-proxy.
	* 
	* 
	* Name:               multinode-20200609112134-5469-m02
	* Roles:              <none>
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=multinode-20200609112134-5469-m02
	*                     kubernetes.io/os=linux
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Tue, 09 Jun 2020 18:22:36 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  multinode-20200609112134-5469-m02
	*   AcquireTime:     <unset>
	*   RenewTime:       Tue, 09 Jun 2020 18:24:08 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Tue, 09 Jun 2020 18:23:06 +0000   Tue, 09 Jun 2020 18:22:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Tue, 09 Jun 2020 18:23:06 +0000   Tue, 09 Jun 2020 18:22:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Tue, 09 Jun 2020 18:23:06 +0000   Tue, 09 Jun 2020 18:22:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Tue, 09 Jun 2020 18:23:06 +0000   Tue, 09 Jun 2020 18:22:46 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  172.17.0.5
	*   Hostname:    multinode-20200609112134-5469-m02
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887012Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887012Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 67a0c942c09e4e4ab0d2de0649d7333f
	*   System UUID:                a9fa822c-f7e0-4f19-89dc-3d1b1622c186
	*   Boot ID:                    64f3ac6d-30f2-41fc-bc23-3cf0dad66462
	*   Kernel Version:             4.9.0-12-amd64
	*   OS Image:                   Ubuntu 19.10
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://19.3.2
	*   Kubelet Version:            v1.18.3
	*   Kube-Proxy Version:         v1.18.3
	* PodCIDR:                      10.244.1.0/24
	* PodCIDRs:                     10.244.1.0/24
	* Non-terminated Pods:          (2 in total)
	*   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	*   kube-system                 kindnet-hf42h       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      101s
	*   kube-system                 kube-proxy-h2pgs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests   Limits
	*   --------           --------   ------
	*   cpu                100m (1%)  100m (1%)
	*   memory             50Mi (0%)  50Mi (0%)
	*   ephemeral-storage  0 (0%)     0 (0%)
	*   hugepages-1Gi      0 (0%)     0 (0%)
	*   hugepages-2Mi      0 (0%)     0 (0%)
	* Events:
	*   Type    Reason                   Age                  From                                           Message
	*   ----    ------                   ----                 ----                                           -------
	*   Normal  Starting                 101s                 kubelet, multinode-20200609112134-5469-m02     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  101s (x2 over 101s)  kubelet, multinode-20200609112134-5469-m02     Node multinode-20200609112134-5469-m02 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    101s (x2 over 101s)  kubelet, multinode-20200609112134-5469-m02     Node multinode-20200609112134-5469-m02 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     101s (x2 over 101s)  kubelet, multinode-20200609112134-5469-m02     Node multinode-20200609112134-5469-m02 status is now: NodeHasSufficientPID
	*   Normal  NodeAllocatableEnforced  101s                 kubelet, multinode-20200609112134-5469-m02     Updated Node Allocatable limit across pods
	*   Normal  Starting                 99s                  kube-proxy, multinode-20200609112134-5469-m02  Starting kube-proxy.
	*   Normal  NodeReady                91s                  kubelet, multinode-20200609112134-5469-m02     Node multinode-20200609112134-5469-m02 status is now: NodeReady
	* 
	* 
	* Name:               multinode-20200609112134-5469-m03
	* Roles:              <none>
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=multinode-20200609112134-5469-m03
	*                     kubernetes.io/os=linux
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Tue, 09 Jun 2020 18:22:51 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  multinode-20200609112134-5469-m03
	*   AcquireTime:     <unset>
	*   RenewTime:       Tue, 09 Jun 2020 18:23:02 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Tue, 09 Jun 2020 18:23:01 +0000   Tue, 09 Jun 2020 18:22:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Tue, 09 Jun 2020 18:23:01 +0000   Tue, 09 Jun 2020 18:22:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Tue, 09 Jun 2020 18:23:01 +0000   Tue, 09 Jun 2020 18:22:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Tue, 09 Jun 2020 18:23:01 +0000   Tue, 09 Jun 2020 18:23:01 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  172.17.0.6
	*   Hostname:    multinode-20200609112134-5469-m03
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887012Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887012Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 2978e76130cd4f979bf6877ff4937bb0
	*   System UUID:                ffaa34ed-1840-42b4-ab6c-4420418946f2
	*   Boot ID:                    64f3ac6d-30f2-41fc-bc23-3cf0dad66462
	*   Kernel Version:             4.9.0-12-amd64
	*   OS Image:                   Ubuntu 19.10
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://19.3.2
	*   Kubelet Version:            v1.18.3
	*   Kube-Proxy Version:         v1.18.3
	* PodCIDR:                      10.244.2.0/24
	* PodCIDRs:                     10.244.2.0/24
	* Non-terminated Pods:          (2 in total)
	*   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	*   kube-system                 kindnet-zbv8n       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      86s
	*   kube-system                 kube-proxy-ndttk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests   Limits
	*   --------           --------   ------
	*   cpu                100m (1%)  100m (1%)
	*   memory             50Mi (0%)  50Mi (0%)
	*   ephemeral-storage  0 (0%)     0 (0%)
	*   hugepages-1Gi      0 (0%)     0 (0%)
	*   hugepages-2Mi      0 (0%)     0 (0%)
	* Events:
	*   Type    Reason                   Age                From                                           Message
	*   ----    ------                   ----               ----                                           -------
	*   Normal  Starting                 86s                kubelet, multinode-20200609112134-5469-m03     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  86s (x2 over 86s)  kubelet, multinode-20200609112134-5469-m03     Node multinode-20200609112134-5469-m03 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    86s (x2 over 86s)  kubelet, multinode-20200609112134-5469-m03     Node multinode-20200609112134-5469-m03 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     86s (x2 over 86s)  kubelet, multinode-20200609112134-5469-m03     Node multinode-20200609112134-5469-m03 status is now: NodeHasSufficientPID
	*   Normal  NodeAllocatableEnforced  86s                kubelet, multinode-20200609112134-5469-m03     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                76s                kubelet, multinode-20200609112134-5469-m03     Node multinode-20200609112134-5469-m03 status is now: NodeReady
	*   Normal  Starting                 75s                kube-proxy, multinode-20200609112134-5469-m03  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [  +0.085360] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	* [  +1.015671] i8042: Warning: Keylock active
	* [  +0.440086] piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
	* [  +0.011675] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
	* [  +0.026654] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10
	* [  +0.029986] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
	* [  +3.127438] systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
	* [ +12.512952] vboxdrv: loading out-of-tree module taints kernel.
	* [  +0.284944] VBoxNetFlt: Successfully started.
	* [  +0.021543] VBoxNetAdp: Successfully started.
	* [Jun 9 17:37] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +14.156107] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +3.682657] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 17:38] IPv4: martian source 10.1.0.3 from 10.1.0.3, on dev mybridge
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff a6 15 2d b4 60 a1 08 06        ........-.`...
	* [  +0.006604] IPv4: martian source 10.1.0.2 from 10.1.0.2, on dev mybridge
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 0a 03 cb 6c cf ba 08 06        .........l....
	* [Jun 9 17:39] IPv4: martian source 10.1.0.2 from 10.1.0.2, on dev mybridge
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 26 1e c4 0d 86 16 08 06        ......&.......
	* [  +6.307972] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 18:19] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 18:21] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 18:22] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +14.385288] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 18:23] cgroup: cgroup2: unknown option "nsdelegate"
	* 
	* ==> etcd [1637d1f68225] <==
	* 2020-06-09 18:23:40.088146 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:610" took too long (1.94161816s) to execute
	* 2020-06-09 18:23:40.088272 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings\" range_end:\"/registry/clusterrolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (1.142741982s) to execute
	* 2020-06-09 18:23:40.088329 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:528" took too long (1.941485228s) to execute
	* 2020-06-09 18:23:40.088407 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (1.941376455s) to execute
	* 2020-06-09 18:23:41.527401 W | wal: sync duration of 1.436307693s, expected less than 1s
	* 2020-06-09 18:23:41.803986 W | etcdserver: request "header:<ID:912949356585000405 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.4\" mod_revision:699 > success:<request_put:<key:\"/registry/masterleases/172.17.0.4\" value_size:65 lease:912949356585000403 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.4\" > >>" with result "size:16" took too long (275.806232ms) to execute
	* 2020-06-09 18:23:41.804800 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:596" took too long (1.657943271s) to execute
	* 2020-06-09 18:23:41.804894 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices\" range_end:\"/registry/apiregistration.k8s.io/apiservicet\" count_only:true " with result "range_response_count:0 size:7" took too long (454.369401ms) to execute
	* 2020-06-09 18:23:41.804955 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:3 size:15480" took too long (1.646722812s) to execute
	* 2020-06-09 18:23:43.284576 W | wal: sync duration of 1.464113769s, expected less than 1s
	* WARNING: 2020/06/09 18:23:44 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	* WARNING: 2020/06/09 18:23:44 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	* 2020-06-09 18:23:46.868224 W | wal: sync duration of 2.1580741s, expected less than 1s
	* 2020-06-09 18:23:47.138330 W | etcdserver: request "header:<ID:912949356585000415 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/kube-controller-manager\" mod_revision:712 > success:<request_put:<key:\"/registry/leases/kube-system/kube-controller-manager\" value_size:453 >> failure:<request_range:<key:\"/registry/leases/kube-system/kube-controller-manager\" > >>" with result "size:16" took too long (3.853402694s) to execute
	* 2020-06-09 18:23:47.138816 W | etcdserver: read-only range request "key:\"/registry/endpointslices/default/kubernetes\" " with result "range_response_count:1 size:482" took too long (5.32528792s) to execute
	* 2020-06-09 18:23:47.138998 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:501" took too long (5.324032554s) to execute
	* 2020-06-09 18:23:47.139271 W | etcdserver: read-only range request "key:\"/registry/ingress\" range_end:\"/registry/ingrest\" count_only:true " with result "range_response_count:0 size:5" took too long (1.267701471s) to execute
	* 2020-06-09 18:23:47.139295 W | etcdserver: read-only range request "key:\"/registry/csinodes\" range_end:\"/registry/csinodet\" count_only:true " with result "range_response_count:0 size:7" took too long (1.420950301s) to execute
	* 2020-06-09 18:23:47.139597 W | etcdserver: read-only range request "key:\"/registry/ingress\" range_end:\"/registry/ingrest\" count_only:true " with result "range_response_count:0 size:5" took too long (1.863547785s) to execute
	* 2020-06-09 18:23:47.139610 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (268.490249ms) to execute
	* 2020-06-09 18:23:47.139818 W | etcdserver: read-only range request "key:\"/registry/rolebindings\" range_end:\"/registry/rolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (389.790913ms) to execute
	* 2020-06-09 18:23:47.139828 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers\" range_end:\"/registry/horizontalpodautoscalert\" count_only:true " with result "range_response_count:0 size:5" took too long (1.967408519s) to execute
	* 2020-06-09 18:23:47.139903 W | etcdserver: read-only range request "key:\"/registry/csinodes\" range_end:\"/registry/csinodet\" count_only:true " with result "range_response_count:0 size:7" took too long (814.922297ms) to execute
	* 2020-06-09 18:23:47.140298 W | etcdserver: read-only range request "key:\"/registry/networkpolicies\" range_end:\"/registry/networkpoliciet\" count_only:true " with result "range_response_count:0 size:5" took too long (3.735640287s) to execute
	* 2020-06-09 18:23:47.140553 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:3 size:15480" took too long (3.684611271s) to execute
	* 
	* ==> kernel <==
	*  18:24:17 up  1:06,  0 users,  load average: 2.05, 1.85, 1.42
	* Linux multinode-20200609112134-5469 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 19.10"
	* 
	* ==> kube-apiserver [d4060bb4cff4] <==
	* Trace[1447411746]: [1.684192298s] [1.684046037s] Object stored in database
	* I0609 18:23:41.806988       1 trace.go:116] Trace[359983098]: "List etcd3" key:/minions,resourceVersion:,limit:0,continue: (started: 2020-06-09 18:23:40.15765032 +0000 UTC m=+109.911614351) (total time: 1.649302812s):
	* Trace[359983098]: [1.649302812s] [1.649302812s] END
	* I0609 18:23:41.807441       1 trace.go:116] Trace[1606118289]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:kube-scheduler/v1.18.3 (linux/amd64) kubernetes/2e7996e/leader-election,client:172.17.0.4 (started: 2020-06-09 18:23:40.146111133 +0000 UTC m=+109.900075164) (total time: 1.661293847s):
	* Trace[1606118289]: [1.661241127s] [1.661200864s] About to write a response
	* I0609 18:23:41.807832       1 trace.go:116] Trace[944409988]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:172.17.0.4 (started: 2020-06-09 18:23:40.157626127 +0000 UTC m=+109.911590302) (total time: 1.650172705s):
	* Trace[944409988]: [1.649376589s] [1.649364088s] Listing from storage done
	* I0609 18:23:41.807911       1 trace.go:116] Trace[757492228]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-06-09 18:23:40.091108718 +0000 UTC m=+109.845072757) (total time: 1.716777612s):
	* Trace[757492228]: [1.716749067s] [1.716026679s] Transaction committed
	* I0609 18:23:41.807999       1 trace.go:116] Trace[1213411849]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.18.3 (linux/amd64) kubernetes/2e7996e/leader-election,client:172.17.0.4 (started: 2020-06-09 18:23:40.090900789 +0000 UTC m=+109.844864824) (total time: 1.717075634s):
	* Trace[1213411849]: [1.717030572s] [1.716898108s] Object stored in database
	* I0609 18:23:44.665404       1 trace.go:116] Trace[1599197335]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2020-06-09 18:23:41.812351307 +0000 UTC m=+111.566315347) (total time: 2.852991933s):
	* Trace[1599197335]: [2.852991933s] [2.852495728s] END
	* E0609 18:23:44.665495       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
	* I0609 18:23:44.665804       1 trace.go:116] Trace[1458904044]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.18.3 (linux/amd64) kubernetes/2e7996e/leader-election,client:172.17.0.4 (started: 2020-06-09 18:23:41.812222554 +0000 UTC m=+111.566186586) (total time: 2.853545959s):
	* Trace[1458904044]: [2.853545959s] [2.853459233s] END
	* E0609 18:23:44.743329       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
	* I0609 18:23:47.139439       1 trace.go:116] Trace[1740873478]: "Get" url:/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes,user-agent:kube-apiserver/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-06-09 18:23:41.813247365 +0000 UTC m=+111.567211507) (total time: 5.326132543s):
	* Trace[1740873478]: [5.326038632s] [5.326031283s] About to write a response
	* I0609 18:23:47.141278       1 trace.go:116] Trace[1906924573]: "List etcd3" key:/minions,resourceVersion:,limit:0,continue: (started: 2020-06-09 18:23:43.455260425 +0000 UTC m=+113.209224454) (total time: 3.685980428s):
	* Trace[1906924573]: [3.685980428s] [3.685980428s] END
	* I0609 18:23:47.141576       1 trace.go:116] Trace[1502929488]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.18.3 (linux/amd64) kubernetes/2e7996e/leader-election,client:172.17.0.4 (started: 2020-06-09 18:23:41.814614316 +0000 UTC m=+111.568578505) (total time: 5.326886428s):
	* Trace[1502929488]: [5.326814231s] [5.326793089s] About to write a response
	* I0609 18:23:47.142165       1 trace.go:116] Trace[635007614]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,client:172.17.0.5 (started: 2020-06-09 18:23:43.455238053 +0000 UTC m=+113.209202079) (total time: 3.686892122s):
	* Trace[635007614]: [3.686065538s] [3.686053562s] Listing from storage done
	* 
	* ==> kube-controller-manager [a8dd2a55e2a0] <==
	* I0609 18:22:20.596698       1 shared_informer.go:230] Caches are synced for stateful set 
	* I0609 18:22:20.596713       1 shared_informer.go:230] Caches are synced for daemon sets 
	* I0609 18:22:20.597592       1 shared_informer.go:230] Caches are synced for resource quota 
	* I0609 18:22:20.612376       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"58dfb109-af52-4a5a-ad6d-eb3478f15055", APIVersion:"apps/v1", ResourceVersion:"203", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-wcwvr
	* I0609 18:22:20.641647       1 shared_informer.go:230] Caches are synced for garbage collector 
	* E0609 18:22:20.743578       1 daemon_controller.go:292] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"58dfb109-af52-4a5a-ad6d-eb3478f15055", ResourceVersion:"203", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63727323719, loc:(*time.Location)(0x6d09200)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00172d9e0), FieldsType:"FieldsV1", FieldsV1:(
*v1.FieldsV1)(0xc00172da00)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00172da20), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolum
eSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0016a88c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPat
hVolumeSource)(0xc00172da40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Project
ed:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00172da60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.Azur
eFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00172daa0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Res
ourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00107fea0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00069f1
f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0004492d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), Preem
ptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000098a88)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00069f298)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	* I0609 18:22:21.038751       1 request.go:621] Throttling request took 1.042658112s, request: GET:https://control-plane.minikube.internal:8443/apis/admissionregistration.k8s.io/v1?timeout=32s
	* I0609 18:22:21.157249       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"a03ac436-73db-469a-8572-0b1d63161656", APIVersion:"apps/v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-jq8cp
	* I0609 18:22:21.646531       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	* I0609 18:22:21.646658       1 shared_informer.go:230] Caches are synced for resource quota 
	* W0609 18:22:36.717002       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20200609112134-5469-m02" does not exist
	* I0609 18:22:36.723084       1 range_allocator.go:373] Set node multinode-20200609112134-5469-m02 PodCIDR to [10.244.1.0/24]
	* I0609 18:22:36.726284       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"58dfb109-af52-4a5a-ad6d-eb3478f15055", APIVersion:"apps/v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-h2pgs
	* I0609 18:22:36.726319       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"a03ac436-73db-469a-8572-0b1d63161656", APIVersion:"apps/v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-hf42h
	* W0609 18:22:40.244846       1 node_lifecycle_controller.go:1048] Missing timestamp for Node multinode-20200609112134-5469-m02. Assuming now as a timestamp.
	* I0609 18:22:40.245458       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20200609112134-5469-m02", UID:"5854fad2-afb5-4130-a37d-3d994601c305", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node multinode-20200609112134-5469-m02 event: Registered Node multinode-20200609112134-5469-m02 in Controller
	* W0609 18:22:51.830936       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20200609112134-5469-m03" does not exist
	* I0609 18:22:51.836274       1 range_allocator.go:373] Set node multinode-20200609112134-5469-m03 PodCIDR to [10.244.2.0/24]
	* I0609 18:22:51.842215       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"58dfb109-af52-4a5a-ad6d-eb3478f15055", APIVersion:"apps/v1", ResourceVersion:"528", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-ndttk
	* I0609 18:22:51.849664       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"a03ac436-73db-469a-8572-0b1d63161656", APIVersion:"apps/v1", ResourceVersion:"545", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-zbv8n
	* W0609 18:22:55.254742       1 node_lifecycle_controller.go:1048] Missing timestamp for Node multinode-20200609112134-5469-m03. Assuming now as a timestamp.
	* I0609 18:22:55.254730       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20200609112134-5469-m03", UID:"75a2c9bf-43b4-4580-acdd-aa669132c9d0", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node multinode-20200609112134-5469-m03 event: Registered Node multinode-20200609112134-5469-m03 in Controller
	* E0609 18:23:44.665252       1 leaderelection.go:356] Failed to update lock: Put https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s: context deadline exceeded
	* I0609 18:23:44.665336       1 leaderelection.go:277] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition
	* F0609 18:23:44.665431       1 controllermanager.go:279] leaderelection lost
	* 
	* ==> kube-controller-manager [ee8718cef3df] <==
	* I0609 18:24:17.650411       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
	* I0609 18:24:17.650504       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
	* I0609 18:24:17.650723       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
	* I0609 18:24:17.650800       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
	* I0609 18:24:17.650850       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
	* I0609 18:24:17.650888       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
	* I0609 18:24:17.650940       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
	* I0609 18:24:17.651023       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
	* I0609 18:24:17.651062       1 controllermanager.go:533] Started "resourcequota"
	* I0609 18:24:17.651124       1 resource_quota_controller.go:272] Starting resource quota controller
	* I0609 18:24:17.651146       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	* I0609 18:24:17.651187       1 resource_quota_monitor.go:303] QuotaMonitor running
	* I0609 18:24:17.658968       1 node_lifecycle_controller.go:384] Sending events to api server.
	* I0609 18:24:17.659238       1 taint_manager.go:163] Sending events to api server.
	* I0609 18:24:17.659330       1 node_lifecycle_controller.go:512] Controller will reconcile labels.
	* I0609 18:24:17.659368       1 controllermanager.go:533] Started "nodelifecycle"
	* I0609 18:24:17.659409       1 node_lifecycle_controller.go:546] Starting node controller
	* I0609 18:24:17.659424       1 shared_informer.go:223] Waiting for caches to sync for taint
	* I0609 18:24:17.696779       1 controllermanager.go:533] Started "persistentvolume-binder"
	* W0609 18:24:17.696809       1 controllermanager.go:525] Skipping "root-ca-cert-publisher"
	* I0609 18:24:17.696876       1 pv_controller_base.go:295] Starting persistent volume controller
	* I0609 18:24:17.696942       1 shared_informer.go:223] Waiting for caches to sync for persistent volume
	* I0609 18:24:17.847154       1 controllermanager.go:533] Started "podgc"
	* I0609 18:24:17.847258       1 gc_controller.go:89] Starting GC controller
	* I0609 18:24:17.847275       1 shared_informer.go:223] Waiting for caches to sync for GC
	* 
	* ==> kube-proxy [6be55225fd44] <==
	* W0609 18:22:22.755750       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	* I0609 18:22:22.776536       1 node.go:136] Successfully retrieved node IP: 172.17.0.4
	* I0609 18:22:22.776577       1 server_others.go:186] Using iptables Proxier.
	* I0609 18:22:22.839199       1 server.go:583] Version: v1.18.3
	* I0609 18:22:22.839911       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I0609 18:22:22.840039       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I0609 18:22:22.840115       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I0609 18:22:22.840467       1 config.go:133] Starting endpoints config controller
	* I0609 18:22:22.840496       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	* I0609 18:22:22.845212       1 config.go:315] Starting service config controller
	* I0609 18:22:22.845565       1 shared_informer.go:223] Waiting for caches to sync for service config
	* I0609 18:22:22.943154       1 shared_informer.go:230] Caches are synced for endpoints config 
	* I0609 18:22:22.945853       1 shared_informer.go:230] Caches are synced for service config 
	* 
	* ==> kube-scheduler [fe5c5564b6a7] <==
	* I0609 18:21:55.841627       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	* W0609 18:21:55.844511       1 authorization.go:47] Authorization is disabled
	* W0609 18:21:55.844620       1 authentication.go:40] Authentication is disabled
	* I0609 18:21:55.844651       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	* I0609 18:21:55.848222       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I0609 18:21:55.848245       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I0609 18:21:55.849666       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	* I0609 18:21:55.849772       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* E0609 18:21:55.855569       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0609 18:21:55.855938       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0609 18:21:55.855970       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0609 18:21:55.856056       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0609 18:21:55.856088       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0609 18:21:55.856164       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0609 18:21:55.856249       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0609 18:21:55.856315       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0609 18:21:55.856417       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0609 18:21:56.760017       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0609 18:21:56.842674       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0609 18:21:56.915698       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0609 18:21:56.962538       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0609 18:21:56.988851       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* I0609 18:21:59.548530       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* I0609 18:21:59.650519       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
	* I0609 18:21:59.742662       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2020-06-09 18:21:36 UTC, end at Tue 2020-06-09 18:24:18 UTC. --
	* Jun 09 18:22:20 multinode-20200609112134-5469 kubelet[2210]: I0609 18:22:20.540897    2210 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/31cce803-6b29-42aa-a42d-f4fbd2c0fac2-config-volume") pod "coredns-66bff467f8-8lp4d" (UID: "31cce803-6b29-42aa-a42d-f4fbd2c0fac2")
	* Jun 09 18:22:20 multinode-20200609112134-5469 kubelet[2210]: I0609 18:22:20.540974    2210 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c71f64ab-0d1d-4fbb-9f6d-121c82083dca-config-volume") pod "coredns-66bff467f8-2ptph" (UID: "c71f64ab-0d1d-4fbb-9f6d-121c82083dca")
	* Jun 09 18:22:20 multinode-20200609112134-5469 kubelet[2210]: I0609 18:22:20.541023    2210 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-c9nkp" (UniqueName: "kubernetes.io/secret/c71f64ab-0d1d-4fbb-9f6d-121c82083dca-coredns-token-c9nkp") pod "coredns-66bff467f8-2ptph" (UID: "c71f64ab-0d1d-4fbb-9f6d-121c82083dca")
	* Jun 09 18:22:20 multinode-20200609112134-5469 kubelet[2210]: I0609 18:22:20.541062    2210 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-c9nkp" (UniqueName: "kubernetes.io/secret/31cce803-6b29-42aa-a42d-f4fbd2c0fac2-coredns-token-c9nkp") pod "coredns-66bff467f8-8lp4d" (UID: "31cce803-6b29-42aa-a42d-f4fbd2c0fac2")
	* Jun 09 18:22:20 multinode-20200609112134-5469 kubelet[2210]: I0609 18:22:20.638991    2210 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Jun 09 18:22:20 multinode-20200609112134-5469 kubelet[2210]: I0609 18:22:20.741849    2210 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-rglfc" (UniqueName: "kubernetes.io/secret/377a3940-10dc-4343-9849-8808b3361a0f-kube-proxy-token-rglfc") pod "kube-proxy-wcwvr" (UID: "377a3940-10dc-4343-9849-8808b3361a0f")
	* Jun 09 18:22:20 multinode-20200609112134-5469 kubelet[2210]: I0609 18:22:20.742295    2210 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/377a3940-10dc-4343-9849-8808b3361a0f-lib-modules") pod "kube-proxy-wcwvr" (UID: "377a3940-10dc-4343-9849-8808b3361a0f")
	* Jun 09 18:22:20 multinode-20200609112134-5469 kubelet[2210]: I0609 18:22:20.742596    2210 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/377a3940-10dc-4343-9849-8808b3361a0f-kube-proxy") pod "kube-proxy-wcwvr" (UID: "377a3940-10dc-4343-9849-8808b3361a0f")
	* Jun 09 18:22:20 multinode-20200609112134-5469 kubelet[2210]: I0609 18:22:20.742645    2210 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/377a3940-10dc-4343-9849-8808b3361a0f-xtables-lock") pod "kube-proxy-wcwvr" (UID: "377a3940-10dc-4343-9849-8808b3361a0f")
	* Jun 09 18:22:21 multinode-20200609112134-5469 kubelet[2210]: I0609 18:22:21.172152    2210 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Jun 09 18:22:21 multinode-20200609112134-5469 kubelet[2210]: I0609 18:22:21.245917    2210 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/f4413252-81e4-4d62-843d-38b4ac459bc0-cni-cfg") pod "kindnet-jq8cp" (UID: "f4413252-81e4-4d62-843d-38b4ac459bc0")
	* Jun 09 18:22:21 multinode-20200609112134-5469 kubelet[2210]: I0609 18:22:21.245985    2210 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/f4413252-81e4-4d62-843d-38b4ac459bc0-xtables-lock") pod "kindnet-jq8cp" (UID: "f4413252-81e4-4d62-843d-38b4ac459bc0")
	* Jun 09 18:22:21 multinode-20200609112134-5469 kubelet[2210]: I0609 18:22:21.246017    2210 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/f4413252-81e4-4d62-843d-38b4ac459bc0-lib-modules") pod "kindnet-jq8cp" (UID: "f4413252-81e4-4d62-843d-38b4ac459bc0")
	* Jun 09 18:22:21 multinode-20200609112134-5469 kubelet[2210]: I0609 18:22:21.246115    2210 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-h9nf8" (UniqueName: "kubernetes.io/secret/f4413252-81e4-4d62-843d-38b4ac459bc0-kindnet-token-h9nf8") pod "kindnet-jq8cp" (UID: "f4413252-81e4-4d62-843d-38b4ac459bc0")
	* Jun 09 18:22:21 multinode-20200609112134-5469 kubelet[2210]: W0609 18:22:21.948140    2210 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-2ptph through plugin: invalid network status for
	* Jun 09 18:22:21 multinode-20200609112134-5469 kubelet[2210]: W0609 18:22:21.953615    2210 pod_container_deletor.go:77] Container "7bf176c59f0074f012e61f62e98b61961b69e9b647f824e44a245347234deb2d" not found in pod's containers
	* Jun 09 18:22:21 multinode-20200609112134-5469 kubelet[2210]: W0609 18:22:21.959082    2210 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-8lp4d through plugin: invalid network status for
	* Jun 09 18:22:21 multinode-20200609112134-5469 kubelet[2210]: W0609 18:22:21.961483    2210 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-2ptph through plugin: invalid network status for
	* Jun 09 18:22:21 multinode-20200609112134-5469 kubelet[2210]: W0609 18:22:21.965351    2210 pod_container_deletor.go:77] Container "51baf5ede9f460b0c20c0ec391c5073a1c8637751ea43e4aee8dd632ee54474a" not found in pod's containers
	* Jun 09 18:22:22 multinode-20200609112134-5469 kubelet[2210]: I0609 18:22:22.675522    2210 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Jun 09 18:22:22 multinode-20200609112134-5469 kubelet[2210]: I0609 18:22:22.755815    2210 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-hhxgl" (UniqueName: "kubernetes.io/secret/ad7f51dd-d358-4e33-bada-06ae37019d42-storage-provisioner-token-hhxgl") pod "storage-provisioner" (UID: "ad7f51dd-d358-4e33-bada-06ae37019d42")
	* Jun 09 18:22:22 multinode-20200609112134-5469 kubelet[2210]: I0609 18:22:22.755933    2210 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/ad7f51dd-d358-4e33-bada-06ae37019d42-tmp") pod "storage-provisioner" (UID: "ad7f51dd-d358-4e33-bada-06ae37019d42")
	* Jun 09 18:22:23 multinode-20200609112134-5469 kubelet[2210]: W0609 18:22:23.140227    2210 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-8lp4d through plugin: invalid network status for
	* Jun 09 18:22:23 multinode-20200609112134-5469 kubelet[2210]: W0609 18:22:23.158157    2210 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-2ptph through plugin: invalid network status for
	* Jun 09 18:23:47 multinode-20200609112134-5469 kubelet[2210]: I0609 18:23:47.189041    2210 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: a8dd2a55e2a09775ce08ed542ccbe4aa1ab24c4d4e6b8c035047440e82a2ad6e
	* 
	* ==> storage-provisioner [a5f49f42dcd6] <==

-- /stdout --
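(The block above — node descriptions, dmesg, and the per-component "==> name <==" sections — appears to be the aggregated output of minikube's log collector; the repeated etcd "read-only range request ... took too long" and "wal: sync duration ... expected less than 1s" warnings usually point at slow disk I/O on the CI host rather than a Kubernetes misconfiguration. A minimal sketch for pulling just the etcd section out of a fresh dump, with the profile name taken from this run and the sed range assuming the same section order as above:

	out/minikube-linux-amd64 -p multinode-20200609112134-5469 logs | sed -n '/==> etcd/,/==> kernel/p'
)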
helpers.go:246: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20200609112134-5469 -n multinode-20200609112134-5469
helpers.go:253: (dbg) Run:  kubectl --context multinode-20200609112134-5469 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers.go:259: non-running pods: kindnet-zbv8n
helpers.go:261: ======> post-mortem[TestMultiNode/serial/StartAfterStop]: describe non-running pods <======
helpers.go:264: (dbg) Run:  kubectl --context multinode-20200609112134-5469 describe pod kindnet-zbv8n
helpers.go:264: (dbg) Non-zero exit: kubectl --context multinode-20200609112134-5469 describe pod kindnet-zbv8n: exit status 1 (158.54209ms)

** stderr ** 
	Error from server (NotFound): pods "kindnet-zbv8n" not found

** /stderr **
helpers.go:266: kubectl --context multinode-20200609112134-5469 describe pod kindnet-zbv8n: exit status 1
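(Note on the NotFound above: the field-selector query runs with -A, but the follow-up describe is issued without a namespace, so it looks in default while kindnet-zbv8n — per the node listing earlier in this dump — lives in kube-system; the pod may also simply have been replaced between the two calls. A minimal sketch for re-running the post-mortem by hand with the namespace made explicit, context name taken from this run:

	kubectl --context multinode-20200609112134-5469 get po -A --field-selector=status.phase!=Running
	kubectl --context multinode-20200609112134-5469 -n kube-system describe pod kindnet-zbv8n
)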

TestFunctional/parallel/DockerEnv (18.46s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
--- FAIL: TestFunctional/parallel/DockerEnv (18.46s)
functional_test.go:167: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20200609111957-5469 docker-env) && out/minikube-linux-amd64 status -p functional-20200609111957-5469"
functional_test.go:167: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20200609111957-5469 docker-env) && out/minikube-linux-amd64 status -p functional-20200609111957-5469": exit status 2 (6.632957773s)

-- stdout --
	functional-20200609111957-5469
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Error
	kubeconfig: Configured
	

-- /stdout --
** stderr ** 
	E0609 11:33:05.269841    6877 status.go:256] Error apiserver status: https://172.17.0.3:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
functional_test.go:173: failed to do status after eval-ing docker-env. error: exit status 2
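A minimal sketch of the failing step, reproduced by hand (assuming the same profile name and binary path as this run; minikube's docker-env typically exports variables such as DOCKER_HOST, DOCKER_TLS_VERIFY and DOCKER_CERT_PATH into the shell):

	# point the shell's docker client at the cluster's daemon, then re-check status
	eval "$(out/minikube-linux-amd64 -p functional-20200609111957-5469 docker-env)"
	env | grep DOCKER_    # confirm which variables docker-env exported
	out/minikube-linux-amd64 status -p functional-20200609111957-5469

The exit status 2 above appears to come from the apiserver healthz 500 shown in stderr rather than from the docker-env evaluation itself.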
helpers.go:214: -----------------------post-mortem--------------------------------
helpers.go:222: ======>  post-mortem[TestFunctional/parallel/DockerEnv]: docker inspect <======
helpers.go:223: (dbg) Run:  docker inspect functional-20200609111957-5469
helpers.go:227: (dbg) docker inspect functional-20200609111957-5469:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9a63678aaaf4423eabfa474c6501f136a1a54f060695e4465d2497d9d2dd28ef",
	        "Created": "2020-06-09T18:19:58.994558041Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5029,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-06-09T18:19:59.595898469Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e6bc41c39dc48b2b472936db36aedb28527ce0f675ed1bc20d029125c9ccf578",
	        "ResolvConfPath": "/var/lib/docker/containers/9a63678aaaf4423eabfa474c6501f136a1a54f060695e4465d2497d9d2dd28ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9a63678aaaf4423eabfa474c6501f136a1a54f060695e4465d2497d9d2dd28ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/9a63678aaaf4423eabfa474c6501f136a1a54f060695e4465d2497d9d2dd28ef/hosts",
	        "LogPath": "/var/lib/docker/containers/9a63678aaaf4423eabfa474c6501f136a1a54f060695e4465d2497d9d2dd28ef/9a63678aaaf4423eabfa474c6501f136a1a54f060695e4465d2497d9d2dd28ef-json.log",
	        "Name": "/functional-20200609111957-5469",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20200609111957-5469:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2936012800,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a21bdc81915e7c4954abd4804f31a38050e1c62661cdd5b2621ec4d3f7952a0d-init/diff:/var/lib/docker/overlay2/842cfb80f5123bafae2466fc7efa639aa41e065f3255b19f9debf027ea5ee70f/diff:/var/lib/docker/overlay2/52955c8ec40656be74515789d00b745e87d9b7fef6138e7b17a5363a06dbcfa5/diff:/var/lib/docker/overlay2/03cddd8e08a064f361b14f4944cfb79c7f8046479d95520269069705f7ab0528/diff:/var/lib/docker/overlay2/c64285a2182b3e7c4c0b57464030adbef4778934f113881df08564634b1f6221/diff:/var/lib/docker/overlay2/90f13b458ed1b350c6216e1ace4dd61d3d2d9dfee23ffc01aa7c9bb98bd421f6/diff:/var/lib/docker/overlay2/fe1683c816f3c3398f9921579d07f6c594583c7c0e5afad822f05cb5888c1268/diff:/var/lib/docker/overlay2/10612719aad9c166640f8cee6edd67101fe099610e2f6c88fcb61b31af35fd9d/diff:/var/lib/docker/overlay2/7c4cc5926eeaa0fefbc7d4a40004d880251629462c856500bafda9daac74d118/diff:/var/lib/docker/overlay2/9aa9a9f3601aea1f46ee059e5089e93043b90fd2fd30e3cd2d15f9183becf2a5/diff:/var/lib/docker/overlay
2/5b620b7b826525fd3203105b70fc1df648dcf00d91b123f32977d15a9aa24d42/diff:/var/lib/docker/overlay2/430918b4b183807894e9422553842dab55b537cc61905b96da054e1bd70225c3/diff:/var/lib/docker/overlay2/487a49458a3b877836066ca9e28d566b97e11dcaeaaa3b2645fb4c57d9e4322f/diff:/var/lib/docker/overlay2/02a4aa873547c0f7358529bad7f6983f4ae79dda4704251d86f5cffd924ecc22/diff:/var/lib/docker/overlay2/57242607bb68a1205e6073d4d78984d3a8ca810645de93f0578d911ff171e91f/diff:/var/lib/docker/overlay2/f7b86afeb24318436caa8fb2ecc416589f3e02ddec1addf6f367987b50ec4671/diff:/var/lib/docker/overlay2/f18bbd9e4f03562d739288185addb9e977807f3f93d0637976cc612e9e703752/diff:/var/lib/docker/overlay2/4a3511ac2d9c89e7a38909f5646b9a5983e5fbd4b20269aa0a438365ac9d960a/diff:/var/lib/docker/overlay2/3a357f9db4e41d2c676e3426a10c5404f0d121c954ac8cae7b1d34babb42323e/diff:/var/lib/docker/overlay2/422f1db82f9e94b7c185a899dfd8d725528b6ffa7b344759697faeae9246dd79/diff:/var/lib/docker/overlay2/135303c7fde9f4ebf5c3b0dfd5d9bc4a70c2bd3d259345543f4b85328bf5afab/diff:/v
ar/lib/docker/overlay2/54798ffee37e6b1949e5e9cb69ea12f7d2fceb53b37445ea1739701a82bae4f3/diff:/var/lib/docker/overlay2/f0432ec26d1b881669832c1d9e9179a47fd26f19eb4ddfba1232f2c00b978c33/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a21bdc81915e7c4954abd4804f31a38050e1c62661cdd5b2621ec4d3f7952a0d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a21bdc81915e7c4954abd4804f31a38050e1c62661cdd5b2621ec4d3f7952a0d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a21bdc81915e7c4954abd4804f31a38050e1c62661cdd5b2621ec4d3f7952a0d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20200609111957-5469",
	                "Source": "/var/lib/docker/volumes/functional-20200609111957-5469/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20200609111957-5469",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20200609111957-5469",
	                "name.minikube.sigs.k8s.io": "functional-20200609111957-5469",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96b0fb4246d6f6544888e48eb9ca103ebd42ea464f25eef1827cc9ecde55b6be",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/96b0fb4246d6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "0837793e5c62ae12d6877c49c2f95634f402a173462e09d8339ccb1119816149",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.3",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:03",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "1fddf8d61680b60b987eb147ce51d80fbf33310bf69844ebbd2f62729313f1ae",
	                    "EndpointID": "0837793e5c62ae12d6877c49c2f95634f402a173462e09d8339ccb1119816149",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.3",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:03",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
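Rather than rereading the full JSON, specific fields from the inspect output above can be pulled with a format template (a sketch; the field names are as they appear in the JSON above):

	# print only the container state and the bridge IP behind the 8441 healthz check
	docker inspect -f '{{ .State.Status }} {{ .NetworkSettings.IPAddress }}' functional-20200609111957-5469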
helpers.go:231: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-20200609111957-5469 -n functional-20200609111957-5469
helpers.go:231: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-20200609111957-5469 -n functional-20200609111957-5469: exit status 2 (3.626909919s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0609 11:33:07.932504    8042 status.go:256] Error apiserver status: https://172.17.0.3:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
helpers.go:231: status error: exit status 2 (may be ok)
helpers.go:236: <<< TestFunctional/parallel/DockerEnv FAILED: start of post-mortem logs <<<
helpers.go:237: ======>  post-mortem[TestFunctional/parallel/DockerEnv]: minikube logs <======
helpers.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 logs -n 25
helpers.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20200609111957-5469 logs -n 25: exit status 69 (8.096587659s)

                                                
                                                
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Tue 2020-06-09 18:20:00 UTC, end at Tue 2020-06-09 18:33:14 UTC. --
	* Jun 09 18:20:05 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:20:05.292677560Z" level=warning msg="Your kernel does not support cgroup rt runtime"
	* Jun 09 18:20:05 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:20:05.292857152Z" level=info msg="Loading containers: start."
	* Jun 09 18:20:05 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:20:05.404847061Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Jun 09 18:20:05 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:20:05.447845142Z" level=info msg="Loading containers: done."
	* Jun 09 18:20:05 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:20:05.487435719Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
	* Jun 09 18:20:05 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:20:05.487528700Z" level=info msg="Daemon has completed initialization"
	* Jun 09 18:20:05 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:20:05.503223335Z" level=info msg="API listen on /var/run/docker.sock"
	* Jun 09 18:20:05 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:20:05.503282543Z" level=info msg="API listen on [::]:2376"
	* Jun 09 18:20:05 functional-20200609111957-5469 systemd[1]: Started Docker Application Container Engine.
	* Jun 09 18:20:43 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:20:43.468646938Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Jun 09 18:20:43 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:20:43.468796679Z" level=warning msg="833b483d3de7360b2c5c10ba491084d78ae4884bdc9ba58e11135ee78e486bd8 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/833b483d3de7360b2c5c10ba491084d78ae4884bdc9ba58e11135ee78e486bd8/mounts/shm, flags: 0x2: no such file or directory"
	* Jun 09 18:21:20 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:21:20.853794962Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* Jun 09 18:21:20 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:21:20.882690445Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* Jun 09 18:30:33 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:30:33.341132396Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Jun 09 18:30:33 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:30:33.341291110Z" level=warning msg="4a334cc3e5afc000b6023aecb7d1a2754eb8f9e901f621064b41548ccf03b785 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/4a334cc3e5afc000b6023aecb7d1a2754eb8f9e901f621064b41548ccf03b785/mounts/shm, flags: 0x2: no such file or directory"
	* Jun 09 18:30:37 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:30:37.428536956Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Jun 09 18:30:37 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:30:37.428641448Z" level=warning msg="b15791ec9abccd003898f05a44f7c1680be94814f206ca9daa3742e648229423 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/b15791ec9abccd003898f05a44f7c1680be94814f206ca9daa3742e648229423/mounts/shm, flags: 0x2: no such file or directory"
	* Jun 09 18:32:22 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:32:22.954472991Z" level=warning msg="5357c47b58c38fce32b70042b078be03a74096c6d6cfa1ba28f348ab345cceef cleanup: failed to unmount IPC: umount /var/lib/docker/containers/5357c47b58c38fce32b70042b078be03a74096c6d6cfa1ba28f348ab345cceef/mounts/shm, flags: 0x2: no such file or directory"
	* Jun 09 18:32:22 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:32:22.954570802Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Jun 09 18:32:22 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:32:22.954613986Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Jun 09 18:32:22 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:32:22.954753693Z" level=warning msg="c7b503b56b180973ed397c01e61d10e30f04674fdae1a42d616472c898d71cc8 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c7b503b56b180973ed397c01e61d10e30f04674fdae1a42d616472c898d71cc8/mounts/shm, flags: 0x2: no such file or directory"
	* Jun 09 18:33:10 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:33:10.336566758Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Jun 09 18:33:10 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:33:10.336782263Z" level=warning msg="312d0c751e5845844acaa9af7ba6a07d6f099dba5076a2852459e4f0b84294b8 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/312d0c751e5845844acaa9af7ba6a07d6f099dba5076a2852459e4f0b84294b8/mounts/shm, flags: 0x2: no such file or directory"
	* Jun 09 18:33:13 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:33:13.467471182Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Jun 09 18:33:13 functional-20200609111957-5469 dockerd[347]: time="2020-06-09T18:33:13.467708472Z" level=warning msg="5f3f0572459a8d972cd5b79ae2f36c01651a7b6a8d7337c6c10f15b20497a462 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/5f3f0572459a8d972cd5b79ae2f36c01651a7b6a8d7337c6c10f15b20497a462/mounts/shm, flags: 0x2: no such file or directory"
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	* 5f3f0572459a8       76216c34ed0c7       37 seconds ago      Exited              kube-scheduler            2                   bb5c227e30a4b
	* 312d0c751e584       da26705ccb4b5       41 seconds ago      Exited              kube-controller-manager   3                   29e7093d14a79
	* 5357c47b58c38       da26705ccb4b5       2 minutes ago       Exited              kube-controller-manager   2                   29e7093d14a79
	* c7b503b56b180       76216c34ed0c7       2 minutes ago       Exited              kube-scheduler            1                   bb5c227e30a4b
	* 4e78755325b0e       67da37a9a360e       11 minutes ago      Running             coredns                   0                   3386630d060f5
	* 8fa2b1b427b66       67da37a9a360e       11 minutes ago      Running             coredns                   0                   5a07dfa3353c4
	* 5c660236666a5       4689081edb103       11 minutes ago      Running             storage-provisioner       0                   89173d6278470
	* ed4b4330fb0ed       3439b7546f29b       11 minutes ago      Running             kube-proxy                0                   ce2ec379393b3
	* 3212659b985a4       303ce5db0e90d       13 minutes ago      Running             etcd                      0                   174a168a56be8
	* 5a206fafe57fa       7e28efa976bd1       13 minutes ago      Running             kube-apiserver            0                   28ef40b4af243
	* 
	* ==> coredns [4e78755325b0] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
	* CoreDNS-1.6.7
	* linux/amd64, go1.13.6, da7f65b
	* 
	* ==> coredns [8fa2b1b427b6] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
	* CoreDNS-1.6.7
	* linux/amd64, go1.13.6, da7f65b
	* 
	* ==> describe nodes <==
	* Name:               functional-20200609111957-5469
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=functional-20200609111957-5469
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=b72d7683536818416863536d77e7e628181d7fce
	*                     minikube.k8s.io/name=functional-20200609111957-5469
	*                     minikube.k8s.io/updated_at=2020_06_09T11_20_22_0700
	*                     minikube.k8s.io/version=v1.11.0
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Tue, 09 Jun 2020 18:20:19 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  functional-20200609111957-5469
	*   AcquireTime:     <unset>
	*   RenewTime:       Tue, 09 Jun 2020 18:33:12 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Tue, 09 Jun 2020 18:31:55 +0000   Tue, 09 Jun 2020 18:20:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Tue, 09 Jun 2020 18:31:55 +0000   Tue, 09 Jun 2020 18:20:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Tue, 09 Jun 2020 18:31:55 +0000   Tue, 09 Jun 2020 18:20:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Tue, 09 Jun 2020 18:31:55 +0000   Tue, 09 Jun 2020 18:20:32 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  172.17.0.3
	*   Hostname:    functional-20200609111957-5469
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887012Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887012Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 7d408960001c4f278f529f8fd1209351
	*   System UUID:                c8581dfd-f4e2-4bab-864b-fa50ecabad20
	*   Boot ID:                    64f3ac6d-30f2-41fc-bc23-3cf0dad66462
	*   Kernel Version:             4.9.0-12-amd64
	*   OS Image:                   Ubuntu 19.10
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://19.3.2
	*   Kubelet Version:            v1.18.3
	*   Kube-Proxy Version:         v1.18.3
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (8 in total)
	*   Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	*   kube-system                 coredns-66bff467f8-dk2cz                                  100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	*   kube-system                 coredns-66bff467f8-mkt94                                  100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	*   kube-system                 etcd-functional-20200609111957-5469                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	*   kube-system                 kube-apiserver-functional-20200609111957-5469             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	*   kube-system                 kube-controller-manager-functional-20200609111957-5469    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	*   kube-system                 kube-proxy-7ljgt                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	*   kube-system                 kube-scheduler-functional-20200609111957-5469             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	*   kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                750m (9%)   0 (0%)
	*   memory             140Mi (0%)  340Mi (1%)
	*   ephemeral-storage  0 (0%)      0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age                From                                        Message
	*   ----    ------                   ----               ----                                        -------
	*   Normal  NodeHasSufficientMemory  13m (x5 over 13m)  kubelet, functional-20200609111957-5469     Node functional-20200609111957-5469 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    13m (x4 over 13m)  kubelet, functional-20200609111957-5469     Node functional-20200609111957-5469 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     13m (x4 over 13m)  kubelet, functional-20200609111957-5469     Node functional-20200609111957-5469 status is now: NodeHasSufficientPID
	*   Normal  Starting                 12m                kubelet, functional-20200609111957-5469     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  12m                kubelet, functional-20200609111957-5469     Node functional-20200609111957-5469 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    12m                kubelet, functional-20200609111957-5469     Node functional-20200609111957-5469 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     12m                kubelet, functional-20200609111957-5469     Node functional-20200609111957-5469 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             12m                kubelet, functional-20200609111957-5469     Node functional-20200609111957-5469 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  12m                kubelet, functional-20200609111957-5469     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                12m                kubelet, functional-20200609111957-5469     Node functional-20200609111957-5469 status is now: NodeReady
	*   Normal  Starting                 11m                kube-proxy, functional-20200609111957-5469  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [Jun 9 18:19] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 18:21] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 18:22] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +14.385288] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 18:23] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 18:24] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 18:26] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +21.550248] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 18:29] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +7.067810] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +8.013370] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +26.737018] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 18:30] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +10.497938] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +22.934605] tee (3660): /proc/30806/oom_adj is deprecated, please use /proc/30806/oom_score_adj instead.
	* [  +0.914867] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +3.850415] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 18:31] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +20.273438] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +21.292392] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 18:32] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +31.806818] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 18:33] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +12.539896] IPv4: martian source 10.1.0.2 from 10.1.0.2, on dev mybridge
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 4a d5 df 95 91 43 08 06        ......J....C..
	* 
	* ==> etcd [3212659b985a] <==
	* 2020-06-09 18:33:05.345971 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (333.550857ms) to execute
	* 2020-06-09 18:33:05.346259 W | etcdserver: read-only range request "key:\"/registry/replicasets\" range_end:\"/registry/replicasett\" count_only:true " with result "range_response_count:0 size:7" took too long (2.402286209s) to execute
	* 2020-06-09 18:33:05.346364 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:5463" took too long (269.503872ms) to execute
	* 2020-06-09 18:33:07.932416 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context canceled" took too long (1.999978023s) to execute
	* WARNING: 2020/06/09 18:33:07 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	* 2020-06-09 18:33:08.660494 W | wal: sync duration of 3.139354894s, expected less than 1s
	* 2020-06-09 18:33:09.005122 W | etcdserver: request "header:<ID:12691269757017702189 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-scheduler\" mod_revision:1766 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-scheduler\" value_size:519 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-scheduler\" > >>" with result "size:16" took too long (344.32634ms) to execute
	* 2020-06-09 18:33:09.005623 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:529" took too long (3.620150761s) to execute
	* WARNING: 2020/06/09 18:33:10 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	* WARNING: 2020/06/09 18:33:10 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	* 2020-06-09 18:33:10.265893 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/namespace-controller\" " with result "error:context canceled" took too long (4.863576477s) to execute
	* WARNING: 2020/06/09 18:33:10 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	* 2020-06-09 18:33:12.730664 W | wal: sync duration of 3.726003665s, expected less than 1s
	* 2020-06-09 18:33:13.194119 W | etcdserver: read-only range request "key:\"/registry/events\" range_end:\"/registry/eventt\" count_only:true " with result "range_response_count:0 size:7" took too long (4.235752819s) to execute
	* 2020-06-09 18:33:13.194773 W | etcdserver: request "header:<ID:12691269757017702192 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/kube-controller-manager\" mod_revision:1765 > success:<request_put:<key:\"/registry/leases/kube-system/kube-controller-manager\" value_size:453 >> failure:<>>" with result "size:16" took too long (463.750919ms) to execute
	* 2020-06-09 18:33:13.228938 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "error:context canceled" took too long (4.219945641s) to execute
	* WARNING: 2020/06/09 18:33:13 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	* 2020-06-09 18:33:13.283322 W | etcdserver: read-only range request "key:\"/registry/persistentvolumes\" range_end:\"/registry/persistentvolumet\" count_only:true " with result "range_response_count:0 size:5" took too long (4.042327574s) to execute
	* 2020-06-09 18:33:13.283770 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.984720044s) to execute
	* 2020-06-09 18:33:13.284049 W | etcdserver: read-only range request "key:\"/registry/controllerrevisions\" range_end:\"/registry/controllerrevisiont\" count_only:true " with result "range_response_count:0 size:7" took too long (2.089318646s) to execute
	* 2020-06-09 18:33:13.284329 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (2.233784944s) to execute
	* 2020-06-09 18:33:13.284638 W | etcdserver: read-only range request "key:\"/registry/clusterroles\" range_end:\"/registry/clusterrolet\" count_only:true " with result "range_response_count:0 size:7" took too long (3.22529626s) to execute
	* 2020-06-09 18:33:13.284944 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts\" range_end:\"/registry/serviceaccountt\" count_only:true " with result "range_response_count:0 size:7" took too long (3.747348241s) to execute
	* 2020-06-09 18:33:14.178180 W | etcdserver: request "header:<ID:12691269757017702201 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.0.3\" mod_revision:1769 > success:<request_put:<key:\"/registry/masterleases/172.17.0.3\" value_size:65 lease:3467897720162926391 >> failure:<request_range:<key:\"/registry/masterleases/172.17.0.3\" > >>" with result "size:16" took too long (356.513158ms) to execute
	* 2020-06-09 18:33:14.178408 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-controller-manager-functional-20200609111957-5469.1616f3c4851d773c\" " with result "range_response_count:1 size:970" took too long (481.816418ms) to execute
	* 
	* ==> kernel <==
	*  18:33:16 up  1:15,  0 users,  load average: 27.16, 14.70, 6.83
	* Linux functional-20200609111957-5469 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 19.10"
	* 
	* ==> kube-apiserver [5a206fafe57f] <==
	* Trace[1930669715]: [3.632626414s] [3.632604812s] About to write a response
	* I0609 18:33:09.006996       1 trace.go:116] Trace[1637713341]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-06-09 18:33:05.383484514 +0000 UTC m=+771.700441031) (total time: 3.623476807s):
	* Trace[1637713341]: [3.623441895s] [3.622564598s] Transaction committed
	* I0609 18:33:09.007127       1 trace.go:116] Trace[1807653444]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:kube-scheduler/v1.18.3 (linux/amd64) kubernetes/2e7996e/leader-election,client:172.17.0.3 (started: 2020-06-09 18:33:05.383181535 +0000 UTC m=+771.700138021) (total time: 3.623905628s):
	* Trace[1807653444]: [3.623842872s] [3.623648328s] Object stored in database
	* I0609 18:33:10.237318       1 trace.go:116] Trace[291393506]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2020-06-09 18:33:09.008393616 +0000 UTC m=+775.325350116) (total time: 1.228829124s):
	* Trace[291393506]: [1.228829124s] [1.228149314s] END
	* E0609 18:33:10.237379       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
	* I0609 18:33:10.237694       1 trace.go:116] Trace[1681397257]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.18.3 (linux/amd64) kubernetes/2e7996e/leader-election,client:172.17.0.3 (started: 2020-06-09 18:33:09.008192646 +0000 UTC m=+775.325149134) (total time: 1.229468984s):
	* Trace[1681397257]: [1.229468984s] [1.229326486s] END
	* E0609 18:33:10.263564       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
	* E0609 18:33:10.263577       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
	* I0609 18:33:10.265053       1 trace.go:116] Trace[1601530090]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/namespace-controller,user-agent:kube-controller-manager/v1.18.3 (linux/amd64) kubernetes/2e7996e/kube-controller-manager,client:172.17.0.3 (started: 2020-06-09 18:33:05.39856952 +0000 UTC m=+771.715526021) (total time: 4.866431097s):
	* Trace[1601530090]: [4.866431097s] [4.866415007s] END
	* I0609 18:33:13.197323       1 trace.go:116] Trace[1497685819]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2020-06-09 18:33:12.511591484 +0000 UTC m=+778.828547978) (total time: 685.681363ms):
	* Trace[1497685819]: [685.657625ms] [684.986027ms] Transaction committed
	* I0609 18:33:13.197525       1 trace.go:116] Trace[829572998]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20200609111957-5469,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:172.17.0.3 (started: 2020-06-09 18:33:12.511354326 +0000 UTC m=+778.828310893) (total time: 686.133758ms):
	* Trace[829572998]: [686.023913ms] [685.873335ms] Object stored in database
	* E0609 18:33:13.228507       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
	* I0609 18:33:13.228842       1 trace.go:116] Trace[726371855]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.18.3 (linux/amd64) kubernetes/2e7996e/leader-election,client:172.17.0.3 (started: 2020-06-09 18:33:09.008495483 +0000 UTC m=+775.325451960) (total time: 4.22029673s):
	* Trace[726371855]: [4.22029673s] [4.220275697s] END
	* I0609 18:33:13.286113       1 trace.go:116] Trace[321663574]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-06-09 18:33:11.049759622 +0000 UTC m=+777.366716117) (total time: 2.236271311s):
	* Trace[321663574]: [2.236178551s] [2.236157035s] About to write a response
	* I0609 18:33:14.179341       1 trace.go:116] Trace[1718102036]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-06-09 18:33:13.289898909 +0000 UTC m=+779.606855410) (total time: 889.381617ms):
	* Trace[1718102036]: [889.333361ms] [886.923952ms] Transaction committed
	* 
	* ==> kube-controller-manager [312d0c751e58] <==
	* I0609 18:32:54.443595       1 resource_quota_controller.go:272] Starting resource quota controller
	* I0609 18:32:54.443613       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	* I0609 18:32:54.443642       1 resource_quota_monitor.go:303] QuotaMonitor running
	* I0609 18:32:54.971536       1 controllermanager.go:533] Started "garbagecollector"
	* I0609 18:32:54.972351       1 garbagecollector.go:133] Starting garbage collector controller
	* I0609 18:32:54.972375       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	* I0609 18:32:54.972406       1 graph_builder.go:282] GraphBuilder running
	* I0609 18:32:55.000186       1 controllermanager.go:533] Started "cronjob"
	* I0609 18:32:55.000367       1 cronjob_controller.go:97] Starting CronJob Manager
	* I0609 18:32:55.055856       1 controllermanager.go:533] Started "bootstrapsigner"
	* I0609 18:32:55.056119       1 shared_informer.go:223] Waiting for caches to sync for bootstrap_signer
	* I0609 18:32:55.074157       1 node_ipam_controller.go:94] Sending events to api server.
	* I0609 18:32:55.588160       1 request.go:621] Throttling request took 1.047652554s, request: GET:https://control-plane.minikube.internal:8441/apis/rbac.authorization.k8s.io/v1?timeout=32s
	* I0609 18:33:05.350352       1 range_allocator.go:82] Sending events to api server.
	* I0609 18:33:05.355519       1 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
	* I0609 18:33:05.355800       1 controllermanager.go:533] Started "nodeipam"
	* I0609 18:33:05.363512       1 node_ipam_controller.go:162] Starting ipam controller
	* I0609 18:33:05.363539       1 shared_informer.go:223] Waiting for caches to sync for node
	* I0609 18:33:05.397430       1 controllermanager.go:533] Started "attachdetach"
	* I0609 18:33:05.397586       1 attach_detach_controller.go:338] Starting attach detach controller
	* I0609 18:33:05.397610       1 shared_informer.go:223] Waiting for caches to sync for attach detach
	* E0609 18:33:10.237067       1 leaderelection.go:356] Failed to update lock: Put https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s: context deadline exceeded
	* I0609 18:33:10.237178       1 leaderelection.go:277] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition
	* F0609 18:33:10.237207       1 controllermanager.go:279] leaderelection lost
	* E0609 18:33:10.239340       1 shared_informer.go:226] unable to sync caches for attach detach
	* 
	* ==> kube-controller-manager [5357c47b58c3] <==
	* 
	* ==> kube-proxy [ed4b4330fb0e] <==
	* W0609 18:21:17.953703       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	* I0609 18:21:17.964859       1 node.go:136] Successfully retrieved node IP: 172.17.0.3
	* I0609 18:21:17.964907       1 server_others.go:186] Using iptables Proxier.
	* I0609 18:21:18.040683       1 server.go:583] Version: v1.18.3
	* I0609 18:21:18.041355       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I0609 18:21:18.041485       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I0609 18:21:18.041553       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I0609 18:21:18.042609       1 config.go:133] Starting endpoints config controller
	* I0609 18:21:18.042821       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	* I0609 18:21:18.043124       1 config.go:315] Starting service config controller
	* I0609 18:21:18.043145       1 shared_informer.go:223] Waiting for caches to sync for service config
	* I0609 18:21:18.143838       1 shared_informer.go:230] Caches are synced for endpoints config 
	* I0609 18:21:18.143838       1 shared_informer.go:230] Caches are synced for service config 
	* 
	* ==> kube-scheduler [5f3f0572459a] <==
	* I0609 18:32:38.013629       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	* I0609 18:32:38.013709       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	* I0609 18:32:38.852585       1 serving.go:313] Generated self-signed cert in-memory
	* I0609 18:32:39.253732       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	* I0609 18:32:39.253774       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	* W0609 18:32:39.257932       1 authorization.go:47] Authorization is disabled
	* W0609 18:32:39.257965       1 authentication.go:40] Authentication is disabled
	* I0609 18:32:39.257984       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	* I0609 18:32:39.260234       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	* I0609 18:32:39.260517       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	* I0609 18:32:39.260534       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	* I0609 18:32:39.260633       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* I0609 18:32:39.260765       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I0609 18:32:39.260786       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I0609 18:32:39.360722       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
	* I0609 18:32:39.361265       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
	* I0609 18:32:39.361374       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* I0609 18:32:57.632364       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
	* E0609 18:33:13.228417       1 leaderelection.go:356] Failed to update lock: resource name may not be empty
	* I0609 18:33:13.228518       1 leaderelection.go:277] failed to renew lease kube-system/kube-scheduler: timed out waiting for the condition
	* F0609 18:33:13.228544       1 server.go:244] leaderelection lost
	* 
	* ==> kube-scheduler [c7b503b56b18] <==
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2020-06-09 18:20:00 UTC, end at Tue 2020-06-09 18:33:17 UTC. --
	* Jun 09 18:21:20 functional-20200609111957-5469 kubelet[2219]: W0609 18:21:20.885300    2219 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-dk2cz through plugin: invalid network status for
	* Jun 09 18:21:20 functional-20200609111957-5469 kubelet[2219]: W0609 18:21:20.888351    2219 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-mkt94 through plugin: invalid network status for
	* Jun 09 18:21:21 functional-20200609111957-5469 kubelet[2219]: W0609 18:21:21.898704    2219 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-mkt94 through plugin: invalid network status for
	* Jun 09 18:21:21 functional-20200609111957-5469 kubelet[2219]: W0609 18:21:21.904813    2219 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-dk2cz through plugin: invalid network status for
	* Jun 09 18:30:38 functional-20200609111957-5469 kubelet[2219]: I0609 18:30:38.740869    2219 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 4a334cc3e5afc000b6023aecb7d1a2754eb8f9e901f621064b41548ccf03b785
	* Jun 09 18:30:38 functional-20200609111957-5469 kubelet[2219]: I0609 18:30:38.755756    2219 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 833b483d3de7360b2c5c10ba491084d78ae4884bdc9ba58e11135ee78e486bd8
	* Jun 09 18:30:38 functional-20200609111957-5469 kubelet[2219]: I0609 18:30:38.757154    2219 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b15791ec9abccd003898f05a44f7c1680be94814f206ca9daa3742e648229423
	* Jun 09 18:30:38 functional-20200609111957-5469 kubelet[2219]: E0609 18:30:38.758229    2219 pod_workers.go:191] Error syncing pod 0cc28924ac57b7780c934826bdeba80a ("kube-controller-manager-functional-20200609111957-5469_kube-system(0cc28924ac57b7780c934826bdeba80a)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-20200609111957-5469_kube-system(0cc28924ac57b7780c934826bdeba80a)"
	* Jun 09 18:30:43 functional-20200609111957-5469 kubelet[2219]: I0609 18:30:43.695138    2219 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b15791ec9abccd003898f05a44f7c1680be94814f206ca9daa3742e648229423
	* Jun 09 18:30:43 functional-20200609111957-5469 kubelet[2219]: E0609 18:30:43.698845    2219 pod_workers.go:191] Error syncing pod 0cc28924ac57b7780c934826bdeba80a ("kube-controller-manager-functional-20200609111957-5469_kube-system(0cc28924ac57b7780c934826bdeba80a)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-20200609111957-5469_kube-system(0cc28924ac57b7780c934826bdeba80a)"
	* Jun 09 18:30:56 functional-20200609111957-5469 kubelet[2219]: I0609 18:30:56.496397    2219 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b15791ec9abccd003898f05a44f7c1680be94814f206ca9daa3742e648229423
	* Jun 09 18:32:26 functional-20200609111957-5469 kubelet[2219]: I0609 18:32:26.773458    2219 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b15791ec9abccd003898f05a44f7c1680be94814f206ca9daa3742e648229423
	* Jun 09 18:32:26 functional-20200609111957-5469 kubelet[2219]: I0609 18:32:26.773989    2219 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5357c47b58c38fce32b70042b078be03a74096c6d6cfa1ba28f348ab345cceef
	* Jun 09 18:32:26 functional-20200609111957-5469 kubelet[2219]: E0609 18:32:26.774960    2219 pod_workers.go:191] Error syncing pod 0cc28924ac57b7780c934826bdeba80a ("kube-controller-manager-functional-20200609111957-5469_kube-system(0cc28924ac57b7780c934826bdeba80a)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-20200609111957-5469_kube-system(0cc28924ac57b7780c934826bdeba80a)"
	* Jun 09 18:32:27 functional-20200609111957-5469 kubelet[2219]: I0609 18:32:27.791528    2219 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: c7b503b56b180973ed397c01e61d10e30f04674fdae1a42d616472c898d71cc8
	* Jun 09 18:32:27 functional-20200609111957-5469 kubelet[2219]: E0609 18:32:27.791921    2219 pod_workers.go:191] Error syncing pod a8caea92c80c24c844216eb1d68fe417 ("kube-scheduler-functional-20200609111957-5469_kube-system(a8caea92c80c24c844216eb1d68fe417)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-functional-20200609111957-5469_kube-system(a8caea92c80c24c844216eb1d68fe417)"
	* Jun 09 18:32:29 functional-20200609111957-5469 kubelet[2219]: I0609 18:32:29.416753    2219 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 4a334cc3e5afc000b6023aecb7d1a2754eb8f9e901f621064b41548ccf03b785
	* Jun 09 18:32:33 functional-20200609111957-5469 kubelet[2219]: I0609 18:32:33.695069    2219 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5357c47b58c38fce32b70042b078be03a74096c6d6cfa1ba28f348ab345cceef
	* Jun 09 18:32:37 functional-20200609111957-5469 kubelet[2219]: I0609 18:32:37.569896    2219 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: c7b503b56b180973ed397c01e61d10e30f04674fdae1a42d616472c898d71cc8
	* Jun 09 18:33:15 functional-20200609111957-5469 kubelet[2219]: I0609 18:33:15.415931    2219 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5357c47b58c38fce32b70042b078be03a74096c6d6cfa1ba28f348ab345cceef
	* Jun 09 18:33:15 functional-20200609111957-5469 kubelet[2219]: I0609 18:33:15.418334    2219 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 312d0c751e5845844acaa9af7ba6a07d6f099dba5076a2852459e4f0b84294b8
	* Jun 09 18:33:15 functional-20200609111957-5469 kubelet[2219]: E0609 18:33:15.419275    2219 pod_workers.go:191] Error syncing pod 0cc28924ac57b7780c934826bdeba80a ("kube-controller-manager-functional-20200609111957-5469_kube-system(0cc28924ac57b7780c934826bdeba80a)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-functional-20200609111957-5469_kube-system(0cc28924ac57b7780c934826bdeba80a)"
	* Jun 09 18:33:15 functional-20200609111957-5469 kubelet[2219]: I0609 18:33:15.435175    2219 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5f3f0572459a8d972cd5b79ae2f36c01651a7b6a8d7337c6c10f15b20497a462
	* Jun 09 18:33:15 functional-20200609111957-5469 kubelet[2219]: E0609 18:33:15.435740    2219 pod_workers.go:191] Error syncing pod a8caea92c80c24c844216eb1d68fe417 ("kube-scheduler-functional-20200609111957-5469_kube-system(a8caea92c80c24c844216eb1d68fe417)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-functional-20200609111957-5469_kube-system(a8caea92c80c24c844216eb1d68fe417)"
	* Jun 09 18:33:15 functional-20200609111957-5469 kubelet[2219]: I0609 18:33:15.487501    2219 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: c7b503b56b180973ed397c01e61d10e30f04674fdae1a42d616472c898d71cc8
	* 
	* ==> storage-provisioner [5c660236666a] <==
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0609 11:33:16.673085    8724 logs.go:178] command /bin/bash -c "docker logs --tail 25 5357c47b58c3" failed with error: /bin/bash -c "docker logs --tail 25 5357c47b58c3": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: 5357c47b58c3
	 output: "\n** stderr ** \nError: No such container: 5357c47b58c3\n\n** /stderr **"
	E0609 11:33:17.001910    8724 logs.go:178] command /bin/bash -c "docker logs --tail 25 c7b503b56b18" failed with error: /bin/bash -c "docker logs --tail 25 c7b503b56b18": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: c7b503b56b18
	 output: "\n** stderr ** \nError: No such container: c7b503b56b18\n\n** /stderr **"
	! unable to fetch logs for: kube-controller-manager [5357c47b58c3], kube-scheduler [c7b503b56b18]

                                                
                                                
** /stderr **
helpers.go:241: failed logs error: exit status 69

                                                
                                    

Test pass (119/128)

TestDownloadOnly/crio/v1.13.0 (4.37s)

                                                
                                                
=== RUN   TestDownloadOnly/crio/v1.13.0
--- PASS: TestDownloadOnly/crio/v1.13.0 (4.37s)
aaa_download_only_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p crio-20200609103556-5469 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=crio --driver=docker 
aaa_download_only_test.go:65: (dbg) Done: out/minikube-linux-amd64 start --download-only -p crio-20200609103556-5469 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=crio --driver=docker : (4.365597658s)

                                                
                                    
TestDownloadOnly/crio/v1.18.3 (2.46s)

                                                
                                                
=== RUN   TestDownloadOnly/crio/v1.18.3
--- PASS: TestDownloadOnly/crio/v1.18.3 (2.46s)
aaa_download_only_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p crio-20200609103556-5469 --force --alsologtostderr --kubernetes-version=v1.18.3 --container-runtime=crio --driver=docker 
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p crio-20200609103556-5469 --force --alsologtostderr --kubernetes-version=v1.18.3 --container-runtime=crio --driver=docker : (2.457246637s)

                                                
                                    
TestDownloadOnly/crio/v1.18.4-rc.0 (3.11s)

                                                
                                                
=== RUN   TestDownloadOnly/crio/v1.18.4-rc.0
--- PASS: TestDownloadOnly/crio/v1.18.4-rc.0 (3.11s)
aaa_download_only_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p crio-20200609103556-5469 --force --alsologtostderr --kubernetes-version=v1.18.4-rc.0 --container-runtime=crio --driver=docker 
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p crio-20200609103556-5469 --force --alsologtostderr --kubernetes-version=v1.18.4-rc.0 --container-runtime=crio --driver=docker : (3.111672583s)

                                                
                                    
TestDownloadOnly/crio/DeleteAll (2.35s)

                                                
                                                
=== RUN   TestDownloadOnly/crio/DeleteAll
--- PASS: TestDownloadOnly/crio/DeleteAll (2.35s)
aaa_download_only_test.go:133: (dbg) Run:  out/minikube-linux-amd64 delete --all
aaa_download_only_test.go:133: (dbg) Done: out/minikube-linux-amd64 delete --all: (2.350147616s)

                                                
                                    
TestDownloadOnly/crio/DeleteAlwaysSucceeds (0.26s)

                                                
                                                
=== RUN   TestDownloadOnly/crio/DeleteAlwaysSucceeds
--- PASS: TestDownloadOnly/crio/DeleteAlwaysSucceeds (0.26s)
aaa_download_only_test.go:145: (dbg) Run:  out/minikube-linux-amd64 delete -p crio-20200609103556-5469
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p crio-20200609103556-5469

                                                
                                    
TestDownloadOnly/docker/v1.13.0 (6.72s)

                                                
                                                
=== RUN   TestDownloadOnly/docker/v1.13.0
--- PASS: TestDownloadOnly/docker/v1.13.0 (6.72s)
aaa_download_only_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p docker-20200609103609-5469 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:65: (dbg) Done: out/minikube-linux-amd64 start --download-only -p docker-20200609103609-5469 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=docker --driver=docker : (6.719835768s)

                                                
                                    
TestDownloadOnly/docker/v1.18.3 (4.78s)

                                                
                                                
=== RUN   TestDownloadOnly/docker/v1.18.3
--- PASS: TestDownloadOnly/docker/v1.18.3 (4.78s)
aaa_download_only_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p docker-20200609103609-5469 --force --alsologtostderr --kubernetes-version=v1.18.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p docker-20200609103609-5469 --force --alsologtostderr --kubernetes-version=v1.18.3 --container-runtime=docker --driver=docker : (4.782149771s)

                                                
                                    
TestDownloadOnly/docker/v1.18.4-rc.0 (6.22s)

                                                
                                                
=== RUN   TestDownloadOnly/docker/v1.18.4-rc.0
--- PASS: TestDownloadOnly/docker/v1.18.4-rc.0 (6.22s)
aaa_download_only_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p docker-20200609103609-5469 --force --alsologtostderr --kubernetes-version=v1.18.4-rc.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p docker-20200609103609-5469 --force --alsologtostderr --kubernetes-version=v1.18.4-rc.0 --container-runtime=docker --driver=docker : (6.223911288s)

                                                
                                    
TestDownloadOnly/docker/DeleteAll (0.4s)

                                                
                                                
=== RUN   TestDownloadOnly/docker/DeleteAll
--- PASS: TestDownloadOnly/docker/DeleteAll (0.40s)
aaa_download_only_test.go:133: (dbg) Run:  out/minikube-linux-amd64 delete --all

                                                
                                    
TestDownloadOnly/docker/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/docker/DeleteAlwaysSucceeds
--- PASS: TestDownloadOnly/docker/DeleteAlwaysSucceeds (0.22s)
aaa_download_only_test.go:145: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-20200609103609-5469
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-20200609103609-5469

                                                
                                    
TestDownloadOnly/containerd/v1.13.0 (9.09s)

                                                
                                                
=== RUN   TestDownloadOnly/containerd/v1.13.0
--- PASS: TestDownloadOnly/containerd/v1.13.0 (9.09s)
aaa_download_only_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p containerd-20200609103628-5469 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=containerd --driver=docker 
aaa_download_only_test.go:65: (dbg) Done: out/minikube-linux-amd64 start --download-only -p containerd-20200609103628-5469 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=containerd --driver=docker : (9.089518296s)

                                                
                                    
TestDownloadOnly/containerd/v1.18.3 (16.45s)

                                                
                                                
=== RUN   TestDownloadOnly/containerd/v1.18.3
--- PASS: TestDownloadOnly/containerd/v1.18.3 (16.45s)
aaa_download_only_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p containerd-20200609103628-5469 --force --alsologtostderr --kubernetes-version=v1.18.3 --container-runtime=containerd --driver=docker 
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p containerd-20200609103628-5469 --force --alsologtostderr --kubernetes-version=v1.18.3 --container-runtime=containerd --driver=docker : (16.445776703s)

                                                
                                    
TestDownloadOnly/containerd/v1.18.4-rc.0 (13.64s)

                                                
                                                
=== RUN   TestDownloadOnly/containerd/v1.18.4-rc.0
--- PASS: TestDownloadOnly/containerd/v1.18.4-rc.0 (13.64s)
aaa_download_only_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p containerd-20200609103628-5469 --force --alsologtostderr --kubernetes-version=v1.18.4-rc.0 --container-runtime=containerd --driver=docker 
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p containerd-20200609103628-5469 --force --alsologtostderr --kubernetes-version=v1.18.4-rc.0 --container-runtime=containerd --driver=docker : (13.641633939s)

                                                
                                    
TestDownloadOnly/containerd/DeleteAll (0.44s)

                                                
                                                
=== RUN   TestDownloadOnly/containerd/DeleteAll
--- PASS: TestDownloadOnly/containerd/DeleteAll (0.44s)
aaa_download_only_test.go:133: (dbg) Run:  out/minikube-linux-amd64 delete --all

                                                
                                    
TestDownloadOnly/containerd/DeleteAlwaysSucceeds (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/containerd/DeleteAlwaysSucceeds
--- PASS: TestDownloadOnly/containerd/DeleteAlwaysSucceeds (0.24s)
aaa_download_only_test.go:145: (dbg) Run:  out/minikube-linux-amd64 delete -p containerd-20200609103628-5469
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p containerd-20200609103628-5469

                                                
                                    
TestDownloadOnlyKic (2.72s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
--- PASS: TestDownloadOnlyKic (2.72s)
aaa_download_only_test.go:168: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20200609103708-5469 --force --alsologtostderr --driver=docker 
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20200609103708-5469

                                                
                                    
TestOffline/group/docker (105.68s)

                                                
                                                
=== RUN   TestOffline/group/docker
=== PAUSE TestOffline/group/docker

                                                
                                                

                                                
                                                
=== CONT  TestOffline/group/docker
--- PASS: TestOffline/group/docker (105.68s)
aab_offline_test.go:53: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-20200609103710-5469 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime docker --driver=docker 
aab_offline_test.go:53: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-20200609103710-5469 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime docker --driver=docker : (1m42.972977748s)
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-20200609103710-5469
helpers.go:170: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-20200609103710-5469: (2.703993864s)

                                                
                                    
TestOffline/group/crio (163.18s)

                                                
                                                
=== RUN   TestOffline/group/crio
=== PAUSE TestOffline/group/crio

                                                
                                                

                                                
                                                
=== CONT  TestOffline/group/crio
--- PASS: TestOffline/group/crio (163.18s)
aab_offline_test.go:53: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-20200609103710-5469 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime crio --driver=docker 
aab_offline_test.go:53: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-20200609103710-5469 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime crio --driver=docker : (2m40.301088108s)
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-20200609103710-5469
helpers.go:170: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-20200609103710-5469: (2.875415339s)

                                                
                                    
TestOffline/group/containerd (87.22s)

                                                
                                                
=== RUN   TestOffline/group/containerd
=== PAUSE TestOffline/group/containerd

                                                
                                                

                                                
                                                
=== CONT  TestOffline/group/containerd
--- PASS: TestOffline/group/containerd (87.22s)
aab_offline_test.go:53: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20200609103710-5469 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime containerd --driver=docker 
aab_offline_test.go:53: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20200609103710-5469 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime containerd --driver=docker : (1m24.147699708s)
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20200609103710-5469
helpers.go:170: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20200609103710-5469: (3.073770027s)

                                                
                                    
TestCertOptions (213.46s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
--- PASS: TestCertOptions (213.46s)
cert_options_test.go:46: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20200609112904-5469 --memory=1900 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker 
cert_options_test.go:46: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20200609112904-5469 --memory=1900 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker : (3m27.462669073s)
cert_options_test.go:57: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20200609112904-5469 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:72: (dbg) Run:  kubectl --context cert-options-20200609112904-5469 config view
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20200609112904-5469
helpers.go:170: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20200609112904-5469: (5.053657224s)

                                                
                                    
TestDockerFlags (150.04s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
> docker-machine-driver-kvm2.sha256: 65 B / 65 B [-------] 100.00% ? p/s 0s
> docker-machine-driver-kvm2: 19.19 MiB / 48.57 MiB [-->___] 39.50% ? p/s ?
> docker-machine-driver-kvm2: 38.73 MiB / 48.57 MiB [---->_] 79.73% ? p/s ?
> docker-machine-driver-kvm2: 48.57 MiB / 48.57 MiB  100.00% 160.50 MiB p/s
> docker-machine-driver-kvm2.sha256: 65 B / 65 B [-------] 100.00% ? p/s 0s
> docker-machine-driver-kvm2: 17.44 MiB / 48.57 MiB [-->___] 35.90% ? p/s ?
> docker-machine-driver-kvm2: 39.04 MiB / 48.57 MiB [---->_] 80.37% ? p/s ?
> docker-machine-driver-kvm2: 48.57 MiB / 48.57 MiB  100.00% 161.37 MiB p/s
--- PASS: TestKVMDriverInstallOrUpdate (2.63s)
--- PASS: TestDockerFlags (150.04s)
docker_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-20200609112904-5469 --cache-images=false --memory=1800 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-20200609112904-5469 --cache-images=false --memory=1800 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (2m26.02576817s)
docker_test.go:46: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20200609112904-5469 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:57: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20200609112904-5469 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-20200609112904-5469
helpers.go:170: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-20200609112904-5469: (2.963796814s)

                                                
                                    
TestForceSystemdFlag (233s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
--- PASS: TestForceSystemdFlag (233.00s)
docker_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20200609112904-5469 --memory=1800 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20200609112904-5469 --memory=1800 --force-systemd --alsologtostderr -v=5 --driver=docker : (3m48.928256305s)
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20200609112904-5469 ssh "docker info --format {{.CgroupDriver}}"
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20200609112904-5469
helpers.go:170: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20200609112904-5469: (3.282216814s)

                                                
                                    
TestForceSystemdEnv (97.88s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
--- PASS: TestForceSystemdEnv (97.88s)
docker_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20200609112904-5469 --memory=1800 --alsologtostderr -v=5 --driver=docker 
docker_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20200609112904-5469 --memory=1800 --alsologtostderr -v=5 --driver=docker : (1m34.019315977s)
docker_test.go:113: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20200609112904-5469 ssh "docker info --format {{.CgroupDriver}}"
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20200609112904-5469
helpers.go:170: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20200609112904-5469: (3.07114714s)

                                                
                                    
TestErrorSpam (69.79s)

                                                
                                                
=== RUN   TestErrorSpam
=== PAUSE TestErrorSpam

                                                
                                                

                                                
                                                
=== CONT  TestErrorSpam
--- PASS: TestErrorSpam (69.79s)
error_spam_test.go:58: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20200609112904-5469 -n=1 --memory=2250 --wait=false --driver=docker 
error_spam_test.go:58: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20200609112904-5469 -n=1 --memory=2250 --wait=false --driver=docker : (1m6.313293473s)
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p nospam-20200609112904-5469
helpers.go:170: (dbg) Done: out/minikube-linux-amd64 delete -p nospam-20200609112904-5469: (3.473888055s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
functional_test.go:925: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/files/etc/test/nested/copy/5469/hosts

                                                
                                    
TestFunctional/serial/StartWithProxy (84.92s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
--- PASS: TestFunctional/serial/StartWithProxy (84.92s)
functional_test.go:221: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20200609111957-5469 --memory=2800 --apiserver-port=8441 --wait=true --driver=docker 
functional_test.go:221: (dbg) Done: out/minikube-linux-amd64 start -p functional-20200609111957-5469 --memory=2800 --apiserver-port=8441 --wait=true --driver=docker : (1m24.917318651s)

                                                
                                    
TestFunctional/serial/SoftStart (4.82s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
--- PASS: TestFunctional/serial/SoftStart (4.82s)
functional_test.go:253: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20200609111957-5469
functional_test.go:253: (dbg) Done: out/minikube-linux-amd64 start -p functional-20200609111957-5469: (4.81704017s)
functional_test.go:257: soft start took 4.818469482s for "functional-20200609111957-5469" cluster.

                                                
                                    
TestFunctional/serial/KubeContext (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
--- PASS: TestFunctional/serial/KubeContext (0.16s)
functional_test.go:274: (dbg) Run:  kubectl config current-context

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.73s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
--- PASS: TestFunctional/serial/KubectlGetPods (0.73s)
functional_test.go:287: (dbg) Run:  kubectl --context functional-20200609111957-5469 get po -A

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add (3.91s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add
--- PASS: TestFunctional/serial/CacheCmd/cache/add (3.91s)
functional_test.go:488: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 cache add busybox:latest
functional_test.go:488: (dbg) Done: out/minikube-linux-amd64 -p functional-20200609111957-5469 cache add busybox:latest: (1.178486869s)
functional_test.go:488: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 cache add busybox:1.28.4-glibc
functional_test.go:488: (dbg) Done: out/minikube-linux-amd64 -p functional-20200609111957-5469 cache add busybox:1.28.4-glibc: (1.300026576s)
functional_test.go:488: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 cache add k8s.gcr.io/pause:latest
functional_test.go:488: (dbg) Done: out/minikube-linux-amd64 -p functional-20200609111957-5469 cache add k8s.gcr.io/pause:latest: (1.427170627s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_busybox:1.28.4-glibc (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_busybox:1.28.4-glibc
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_busybox:1.28.4-glibc (0.05s)
functional_test.go:495: (dbg) Run:  out/minikube-linux-amd64 cache delete busybox:1.28.4-glibc

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)
functional_test.go:502: (dbg) Run:  out/minikube-linux-amd64 cache list

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)
functional_test.go:515: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 ssh sudo crictl images

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)
functional_test.go:528: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 ssh sudo docker rmi busybox:latest
functional_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 ssh sudo crictl inspecti busybox:latest
functional_test.go:534: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20200609111957-5469 ssh sudo crictl inspecti busybox:latest: exit status 1 (327.110548ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "busybox:latest" present       

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 cache reload
functional_test.go:544: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 ssh sudo crictl inspecti busybox:latest

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.36s)
functional_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 kubectl -- --context functional-20200609111957-5469 get pods

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (64.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.53s)
multinode_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20200609112134-5469 --wait=true --memory=2200 --nodes=2 --driver=docker 
multinode_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20200609112134-5469 --wait=true --memory=2200 --nodes=2 --driver=docker : (1m3.846915818s)
multinode_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20200609112134-5469 status --alsologtostderr

                                                
                                    
TestMultiNode/serial/AddNode (24.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
--- PASS: TestMultiNode/serial/AddNode (24.37s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20200609112134-5469 -v 3 --alsologtostderr
multinode_test.go:89: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20200609112134-5469 -v 3 --alsologtostderr: (23.441923666s)
multinode_test.go:95: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20200609112134-5469 status --alsologtostderr

                                                
                                    
TestMultiNode/serial/StopNode (2.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
--- PASS: TestMultiNode/serial/StopNode (2.83s)
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20200609112134-5469 node stop m03
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 -p multinode-20200609112134-5469 node stop m03: (1.414383646s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20200609112134-5469 status
multinode_test.go:117: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20200609112134-5469 status: exit status 7 (722.399337ms)

                                                
                                                
-- stdout --
	multinode-20200609112134-5469
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20200609112134-5469-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20200609112134-5469-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:124: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20200609112134-5469 status --alsologtostderr
multinode_test.go:124: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20200609112134-5469 status --alsologtostderr: exit status 7 (695.663107ms)

                                                
                                                
-- stdout --
	multinode-20200609112134-5469
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20200609112134-5469-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20200609112134-5469-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0609 11:23:05.689394   22672 mustload.go:64] Loading cluster: multinode-20200609112134-5469
	I0609 11:23:05.689670   22672 status.go:123] checking status of  ...
	I0609 11:23:05.690156   22672 cli_runner.go:108] Run: docker container inspect multinode-20200609112134-5469 --format={{.State.Status}}
	I0609 11:23:05.747099   22672 status.go:188] multinode-20200609112134-5469 host status = "Running" (err=<nil>)
	I0609 11:23:05.747134   22672 host.go:65] Checking if "multinode-20200609112134-5469" exists ...
	I0609 11:23:05.747552   22672 cli_runner.go:108] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20200609112134-5469
	I0609 11:23:05.803493   22672 host.go:65] Checking if "multinode-20200609112134-5469" exists ...
	I0609 11:23:05.803809   22672 system_pods.go:160] Checking kubelet status ...
	I0609 11:23:05.803873   22672 ssh_runner.go:148] Run: systemctl --version
	I0609 11:23:05.803922   22672 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469
	I0609 11:23:05.860520   22672 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32791 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/multinode-20200609112134-5469/id_rsa Username:docker}
	I0609 11:23:05.960500   22672 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service kubelet
	I0609 11:23:05.972903   22672 status.go:232] multinode-20200609112134-5469 kubelet status = Running
	I0609 11:23:05.974154   22672 kubeconfig.go:93] found "multinode-20200609112134-5469" server: "https://172.17.0.4:8443"
	I0609 11:23:05.974184   22672 api_server.go:145] Checking apiserver status ...
	I0609 11:23:05.974223   22672 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0609 11:23:05.986631   22672 ssh_runner.go:148] Run: sudo egrep ^[0-9]+:freezer: /proc/1757/cgroup
	I0609 11:23:05.997047   22672 api_server.go:161] apiserver freezer: "5:freezer:/docker/4fa316c228636c38751b12f429c3e0ba46a438854d56e4f9a0da336d65914ffa/kubepods/burstable/pod8ea3eff5406575bb435cdaedf3d1c764/d4060bb4cff474211d1a504c5d218c346dc3b9cd66ae99446df17835ea778911"
	I0609 11:23:05.997123   22672 ssh_runner.go:148] Run: sudo cat /sys/fs/cgroup/freezer/docker/4fa316c228636c38751b12f429c3e0ba46a438854d56e4f9a0da336d65914ffa/kubepods/burstable/pod8ea3eff5406575bb435cdaedf3d1c764/d4060bb4cff474211d1a504c5d218c346dc3b9cd66ae99446df17835ea778911/freezer.state
	I0609 11:23:06.006384   22672 api_server.go:183] freezer state: "THAWED"
	I0609 11:23:06.006453   22672 api_server.go:193] Checking apiserver healthz at https://172.17.0.4:8443/healthz ...
	I0609 11:23:06.012256   22672 api_server.go:213] https://172.17.0.4:8443/healthz returned 200:
	ok
	I0609 11:23:06.012284   22672 status.go:253] multinode-20200609112134-5469 apiserver status = Running (err=<nil>)
	I0609 11:23:06.012295   22672 status.go:126] multinode-20200609112134-5469 status: &{Name:multinode-20200609112134-5469 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false}
	I0609 11:23:06.012313   22672 status.go:123] checking status of m02 ...
	I0609 11:23:06.012621   22672 cli_runner.go:108] Run: docker container inspect multinode-20200609112134-5469-m02 --format={{.State.Status}}
	I0609 11:23:06.068758   22672 status.go:188] multinode-20200609112134-5469-m02 host status = "Running" (err=<nil>)
	I0609 11:23:06.068787   22672 host.go:65] Checking if "multinode-20200609112134-5469-m02" exists ...
	I0609 11:23:06.069153   22672 cli_runner.go:108] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20200609112134-5469-m02
	I0609 11:23:06.127102   22672 host.go:65] Checking if "multinode-20200609112134-5469-m02" exists ...
	I0609 11:23:06.127442   22672 system_pods.go:160] Checking kubelet status ...
	I0609 11:23:06.127518   22672 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service kubelet
	I0609 11:23:06.127558   22672 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20200609112134-5469-m02
	I0609 11:23:06.184415   22672 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32795 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube/machines/multinode-20200609112134-5469-m02/id_rsa Username:docker}
	I0609 11:23:06.275520   22672 status.go:232] multinode-20200609112134-5469-m02 kubelet status = Running
	I0609 11:23:06.275549   22672 status.go:126] multinode-20200609112134-5469-m02 status: &{Name:multinode-20200609112134-5469-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true}
	I0609 11:23:06.275578   22672 status.go:123] checking status of m03 ...
	I0609 11:23:06.275897   22672 cli_runner.go:108] Run: docker container inspect multinode-20200609112134-5469-m03 --format={{.State.Status}}
	I0609 11:23:06.332085   22672 status.go:188] multinode-20200609112134-5469-m03 host status = "Stopped" (err=<nil>)
	I0609 11:23:06.332115   22672 status.go:201] host is not running, skipping remaining checks
	I0609 11:23:06.332122   22672 status.go:126] multinode-20200609112134-5469-m03 status: &{Name:multinode-20200609112134-5469-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true}

                                                
                                                
** /stderr **

                                                
                                    
TestMultiNode/serial/DeleteNode (6.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
--- PASS: TestMultiNode/serial/DeleteNode (6.06s)
multinode_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20200609112134-5469 node delete m03
multinode_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p multinode-20200609112134-5469 node delete m03: (5.369482284s)
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20200609112134-5469 status --alsologtostderr
multinode_test.go:265: (dbg) Run:  docker volume ls

                                                
                                    
TestMultiNode/serial/StopMultiNode (12.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
--- PASS: TestMultiNode/serial/StopMultiNode (12.80s)
multinode_test.go:183: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20200609112134-5469 stop
multinode_test.go:183: (dbg) Done: out/minikube-linux-amd64 -p multinode-20200609112134-5469 stop: (12.481359151s)
multinode_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20200609112134-5469 status
multinode_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20200609112134-5469 status: exit status 7 (163.604071ms)

                                                
                                                
-- stdout --
	multinode-20200609112134-5469
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20200609112134-5469-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:196: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20200609112134-5469 status --alsologtostderr
multinode_test.go:196: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20200609112134-5469 status --alsologtostderr: exit status 7 (158.974803ms)

                                                
                                                
-- stdout --
	multinode-20200609112134-5469
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20200609112134-5469-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0609 11:24:37.946390   28131 mustload.go:64] Loading cluster: multinode-20200609112134-5469
	I0609 11:24:37.946737   28131 status.go:123] checking status of  ...
	I0609 11:24:37.947291   28131 cli_runner.go:108] Run: docker container inspect multinode-20200609112134-5469 --format={{.State.Status}}
	I0609 11:24:38.002624   28131 status.go:188] multinode-20200609112134-5469 host status = "Stopped" (err=<nil>)
	I0609 11:24:38.002659   28131 status.go:201] host is not running, skipping remaining checks
	I0609 11:24:38.002668   28131 status.go:126] multinode-20200609112134-5469 status: &{Name:multinode-20200609112134-5469 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false}
	I0609 11:24:38.002690   28131 status.go:123] checking status of m02 ...
	I0609 11:24:38.003066   28131 cli_runner.go:108] Run: docker container inspect multinode-20200609112134-5469-m02 --format={{.State.Status}}
	I0609 11:24:38.057746   28131 status.go:188] multinode-20200609112134-5469-m02 host status = "Stopped" (err=<nil>)
	I0609 11:24:38.057776   28131 status.go:201] host is not running, skipping remaining checks
	I0609 11:24:38.057785   28131 status.go:126] multinode-20200609112134-5469-m02 status: &{Name:multinode-20200609112134-5469-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true}

                                                
                                                
** /stderr **

                                                
                                    
TestMultiNode/serial/RestartMultiNode (110.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
--- PASS: TestMultiNode/serial/RestartMultiNode (110.80s)
multinode_test.go:212: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20200609112134-5469 --driver=docker 
multinode_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20200609112134-5469 --driver=docker : (1m50.057555737s)
multinode_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20200609112134-5469 status --alsologtostderr
multinode_test.go:60: *** TestMultiNode FAILED at 2020-06-09 11:26:28.862744666 -0700 PDT m=+3032.224600285
helpers.go:214: -----------------------post-mortem--------------------------------
helpers.go:222: ======>  post-mortem[TestMultiNode]: docker inspect <======
helpers.go:223: (dbg) Run:  docker inspect multinode-20200609112134-5469
helpers.go:227: (dbg) docker inspect multinode-20200609112134-5469:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4fa316c228636c38751b12f429c3e0ba46a438854d56e4f9a0da336d65914ffa",
	        "Created": "2020-06-09T18:21:35.901912908Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 28384,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2020-06-09T18:24:39.64185028Z",
	            "FinishedAt": "2020-06-09T18:24:35.894651527Z"
	        },
	        "Image": "sha256:e6bc41c39dc48b2b472936db36aedb28527ce0f675ed1bc20d029125c9ccf578",
	        "ResolvConfPath": "/var/lib/docker/containers/4fa316c228636c38751b12f429c3e0ba46a438854d56e4f9a0da336d65914ffa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4fa316c228636c38751b12f429c3e0ba46a438854d56e4f9a0da336d65914ffa/hostname",
	        "HostsPath": "/var/lib/docker/containers/4fa316c228636c38751b12f429c3e0ba46a438854d56e4f9a0da336d65914ffa/hosts",
	        "LogPath": "/var/lib/docker/containers/4fa316c228636c38751b12f429c3e0ba46a438854d56e4f9a0da336d65914ffa/4fa316c228636c38751b12f429c3e0ba46a438854d56e4f9a0da336d65914ffa-json.log",
	        "Name": "/multinode-20200609112134-5469",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-20200609112134-5469:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": -1,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a80949e3a62d15663d922c82fb78d9a89c2e63044757613442b03f4e61abfcbf-init/diff:/var/lib/docker/overlay2/842cfb80f5123bafae2466fc7efa639aa41e065f3255b19f9debf027ea5ee70f/diff:/var/lib/docker/overlay2/52955c8ec40656be74515789d00b745e87d9b7fef6138e7b17a5363a06dbcfa5/diff:/var/lib/docker/overlay2/03cddd8e08a064f361b14f4944cfb79c7f8046479d95520269069705f7ab0528/diff:/var/lib/docker/overlay2/c64285a2182b3e7c4c0b57464030adbef4778934f113881df08564634b1f6221/diff:/var/lib/docker/overlay2/90f13b458ed1b350c6216e1ace4dd61d3d2d9dfee23ffc01aa7c9bb98bd421f6/diff:/var/lib/docker/overlay2/fe1683c816f3c3398f9921579d07f6c594583c7c0e5afad822f05cb5888c1268/diff:/var/lib/docker/overlay2/10612719aad9c166640f8cee6edd67101fe099610e2f6c88fcb61b31af35fd9d/diff:/var/lib/docker/overlay2/7c4cc5926eeaa0fefbc7d4a40004d880251629462c856500bafda9daac74d118/diff:/var/lib/docker/overlay2/9aa9a9f3601aea1f46ee059e5089e93043b90fd2fd30e3cd2d15f9183becf2a5/diff:/var/lib/docker/overlay2/5b620b7b826525fd3203105b70fc1df648dcf00d91b123f32977d15a9aa24d42/diff:/var/lib/docker/overlay2/430918b4b183807894e9422553842dab55b537cc61905b96da054e1bd70225c3/diff:/var/lib/docker/overlay2/487a49458a3b877836066ca9e28d566b97e11dcaeaaa3b2645fb4c57d9e4322f/diff:/var/lib/docker/overlay2/02a4aa873547c0f7358529bad7f6983f4ae79dda4704251d86f5cffd924ecc22/diff:/var/lib/docker/overlay2/57242607bb68a1205e6073d4d78984d3a8ca810645de93f0578d911ff171e91f/diff:/var/lib/docker/overlay2/f7b86afeb24318436caa8fb2ecc416589f3e02ddec1addf6f367987b50ec4671/diff:/var/lib/docker/overlay2/f18bbd9e4f03562d739288185addb9e977807f3f93d0637976cc612e9e703752/diff:/var/lib/docker/overlay2/4a3511ac2d9c89e7a38909f5646b9a5983e5fbd4b20269aa0a438365ac9d960a/diff:/var/lib/docker/overlay2/3a357f9db4e41d2c676e3426a10c5404f0d121c954ac8cae7b1d34babb42323e/diff:/var/lib/docker/overlay2/422f1db82f9e94b7c185a899dfd8d725528b6ffa7b344759697faeae9246dd79/diff:/var/lib/docker/overlay2/135303c7fde9f4ebf5c3b0dfd5d9bc4a70c2bd3d259345543f4b85328bf5afab/diff:/var/lib/docker/overlay2/54798ffee37e6b1949e5e9cb69ea12f7d2fceb53b37445ea1739701a82bae4f3/diff:/var/lib/docker/overlay2/f0432ec26d1b881669832c1d9e9179a47fd26f19eb4ddfba1232f2c00b978c33/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a80949e3a62d15663d922c82fb78d9a89c2e63044757613442b03f4e61abfcbf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a80949e3a62d15663d922c82fb78d9a89c2e63044757613442b03f4e61abfcbf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a80949e3a62d15663d922c82fb78d9a89c2e63044757613442b03f4e61abfcbf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-20200609112134-5469",
	                "Source": "/var/lib/docker/volumes/multinode-20200609112134-5469/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-20200609112134-5469",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-20200609112134-5469",
	                "name.minikube.sigs.k8s.io": "multinode-20200609112134-5469",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4154fd277b4ee2e0296bd3fb75c909ec77432598afd64f8119d1897acc24c162",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32807"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32806"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32805"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32804"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4154fd277b4e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "f0fe9a43b47d8b6eb6ef87393fda4515d04d0b1dd04db3120e80f7bef9fc090f",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "1fddf8d61680b60b987eb147ce51d80fbf33310bf69844ebbd2f62729313f1ae",
	                    "EndpointID": "f0fe9a43b47d8b6eb6ef87393fda4515d04d0b1dd04db3120e80f7bef9fc090f",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers.go:231: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20200609112134-5469 -n multinode-20200609112134-5469
helpers.go:236: <<< TestMultiNode FAILED: start of post-mortem logs <<<
helpers.go:237: ======>  post-mortem[TestMultiNode]: minikube logs <======
helpers.go:239: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20200609112134-5469 logs -n 25
helpers.go:239: (dbg) Done: out/minikube-linux-amd64 -p multinode-20200609112134-5469 logs -n 25: (2.126410148s)
helpers.go:244: TestMultiNode logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Tue 2020-06-09 18:24:40 UTC, end at Tue 2020-06-09 18:26:30 UTC. --
	* Jun 09 18:24:56 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:24:56.578220334Z" level=warning msg="949c1b35e2481c726ef99da93f192999d767e7e2505ce7065d83aea900a02c8b cleanup: failed to unmount IPC: umount /var/lib/docker/containers/949c1b35e2481c726ef99da93f192999d767e7e2505ce7065d83aea900a02c8b/mounts/shm, flags: 0x2: no such file or directory"
	* Jun 09 18:24:56 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:24:56.647415640Z" level=info msg="shim containerd-shim started" address=/containerd-shim/fc92ec0681609abff5b4728cdb2145d0a2126692bcb873e5147d1046edbce50b.sock debug=false pid=2665
	* Jun 09 18:24:56 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:24:56.762733986Z" level=info msg="shim containerd-shim started" address=/containerd-shim/83ec0041c3b744b6977a18dc6ee780556076b983a330c4e68e10b74fbf36eba6.sock debug=false pid=2707
	* Jun 09 18:24:56 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:24:56.764072771Z" level=info msg="shim containerd-shim started" address=/containerd-shim/1d230d37c3fd98da2b574ff793873165e84a9cf3507cb1a5f5300ef9abde0f31.sock debug=false pid=2711
	* Jun 09 18:24:56 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:24:56.864445651Z" level=info msg="shim reaped" id=935e8b1b45d3b5bb134f6a7e3bdd7e83b34ecc032dcd6c14b155ea3f61981612
	* Jun 09 18:24:56 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:24:56.866651422Z" level=info msg="shim containerd-shim started" address=/containerd-shim/2c21c48dc44d985b93e2e22add1b00bd50b5895ac078869b449699c12f857dc9.sock debug=false pid=2775
	* Jun 09 18:24:56 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:24:56.943754662Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Jun 09 18:24:57 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:24:57.170698869Z" level=info msg="shim containerd-shim started" address=/containerd-shim/b05afbf9feb59051b7cc95e7d1d4164cc64ebb032bf97029c0a4960c8c43f7b1.sock debug=false pid=2839
	* Jun 09 18:24:57 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:24:57.254742501Z" level=info msg="shim containerd-shim started" address=/containerd-shim/27abab08e8c51c5a387f022e2788fb984d26bb049e5856e5b950195858f48a2b.sock debug=false pid=2854
	* Jun 09 18:24:57 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:24:57.355446706Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* Jun 09 18:24:57 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:24:57.465999065Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* Jun 09 18:24:57 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:24:57.469377047Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* Jun 09 18:24:57 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:24:57.483110342Z" level=info msg="shim containerd-shim started" address=/containerd-shim/b9cd23f64292b96c059d76d1c75ccb06bf56df79670995832948daff9f59bb0e.sock debug=false pid=2952
	* Jun 09 18:24:57 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:24:57.663321262Z" level=info msg="shim containerd-shim started" address=/containerd-shim/c608896793e63765bf9c85bf1faa6dddee098df28b66e0428babd91ff991089f.sock debug=false pid=2983
	* Jun 09 18:24:57 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:24:57.665087121Z" level=info msg="shim containerd-shim started" address=/containerd-shim/94885fa046ff34ae2780d2824aa61d821993c82363ec1fbe3b07a1a92f1c37ec.sock debug=false pid=2987
	* Jun 09 18:24:58 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:24:58.090481569Z" level=info msg="shim reaped" id=da36dd84ccec04931944218ddb1fbc260672f20c6adb290ac8100fbcca4135c0
	* Jun 09 18:24:58 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:24:58.101503039Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Jun 09 18:24:58 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:24:58.101645430Z" level=warning msg="da36dd84ccec04931944218ddb1fbc260672f20c6adb290ac8100fbcca4135c0 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/da36dd84ccec04931944218ddb1fbc260672f20c6adb290ac8100fbcca4135c0/mounts/shm, flags: 0x2: no such file or directory"
	* Jun 09 18:25:02 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:25:02.048586412Z" level=info msg="shim containerd-shim started" address=/containerd-shim/e1f541f02f41d87e2c3ed1fba816449481d7e097399d2f1175c7ff131c4090f2.sock debug=false pid=3105
	* Jun 09 18:25:12 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:25:12.359375526Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	* Jun 09 18:25:12 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:25:12.453782702Z" level=info msg="shim containerd-shim started" address=/containerd-shim/99153bced5134abe038d30a62aa3221473bc8c1054a30128d991602cead0d11b.sock debug=false pid=3360
	* Jun 09 18:25:27 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:25:27.857997645Z" level=info msg="shim reaped" id=7f709e9a5b2f81df4c71039cc800507992182cd05375b0058807d3198e98fbeb
	* Jun 09 18:25:27 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:25:27.868342314Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	* Jun 09 18:25:27 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:25:27.868484338Z" level=warning msg="7f709e9a5b2f81df4c71039cc800507992182cd05375b0058807d3198e98fbeb cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7f709e9a5b2f81df4c71039cc800507992182cd05375b0058807d3198e98fbeb/mounts/shm, flags: 0x2: no such file or directory"
	* Jun 09 18:25:42 multinode-20200609112134-5469 dockerd[109]: time="2020-06-09T18:25:42.438416633Z" level=info msg="shim containerd-shim started" address=/containerd-shim/1d793e056e9298c8ea4c531526c92b3ee0f85f194c94af6785dfbed94359d184.sock debug=false pid=3617
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	* 984910f542eae       4689081edb103       48 seconds ago       Running             storage-provisioner       2                   7b36d992e0bb5
	* 47bc00d1e107a       2186a1a396deb       About a minute ago   Running             kindnet-cni               2                   37154d7465c18
	* c7888b70b6f68       7e28efa976bd1       About a minute ago   Running             kube-apiserver            2                   8320ba5b50334
	* 4452b23102647       67da37a9a360e       About a minute ago   Running             coredns                   1                   e3d60753b3dfe
	* abebf43541dc4       67da37a9a360e       About a minute ago   Running             coredns                   1                   3524659739df9
	* da36dd84ccec0       2186a1a396deb       About a minute ago   Exited              kindnet-cni               1                   37154d7465c18
	* 7f709e9a5b2f8       4689081edb103       About a minute ago   Exited              storage-provisioner       1                   7b36d992e0bb5
	* 9010faa64c192       3439b7546f29b       About a minute ago   Running             kube-proxy                1                   e39ccb9f48d23
	* 4e6aced09728d       7e28efa976bd1       About a minute ago   Exited              kube-apiserver            1                   8320ba5b50334
	* 2cabbaaffd4a5       303ce5db0e90d       About a minute ago   Running             etcd                      0                   6bf6c96020584
	* a1867d20ab33c       da26705ccb4b5       About a minute ago   Running             kube-controller-manager   2                   4a8285030cdd8
	* 479c983f54af9       76216c34ed0c7       About a minute ago   Running             kube-scheduler            1                   424546448e3ca
	* ee8718cef3dff       da26705ccb4b5       2 minutes ago        Exited              kube-controller-manager   1                   5d1f9410e3296
	* 1c713dd9e092b       67da37a9a360e       4 minutes ago        Exited              coredns                   0                   51baf5ede9f46
	* a361ad6db7959       67da37a9a360e       4 minutes ago        Exited              coredns                   0                   7bf176c59f007
	* 6be55225fd448       3439b7546f29b       4 minutes ago        Exited              kube-proxy                0                   6600cadc861db
	* fe5c5564b6a7d       76216c34ed0c7       4 minutes ago        Exited              kube-scheduler            0                   fd3d278b9d33d
	* 
	* ==> coredns [1c713dd9e092] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
	* CoreDNS-1.6.7
	* linux/amd64, go1.13.6, da7f65b
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] SIGTERM: Shutting down servers then terminating
	* [INFO] plugin/health: Going into lameduck mode for 5s
	* I0609 18:22:52.942941       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-06-09 18:22:22.942109663 +0000 UTC m=+0.097909089) (total time: 30.000701863s):
	* Trace[2019727887]: [30.000701863s] [30.000701863s] END
	* E0609 18:22:52.943253       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* I0609 18:22:52.943340       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-06-09 18:22:22.942833181 +0000 UTC m=+0.098632573) (total time: 30.000438781s):
	* Trace[1427131847]: [30.000438781s] [30.000438781s] END
	* E0609 18:22:52.943366       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* I0609 18:22:52.943703       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-06-09 18:22:22.943128606 +0000 UTC m=+0.098928006) (total time: 30.000551532s):
	* Trace[939984059]: [30.000551532s] [30.000551532s] END
	* E0609 18:22:52.943760       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* 
	* ==> coredns [4452b2310264] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
	* CoreDNS-1.6.7
	* linux/amd64, go1.13.6, da7f65b
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* I0609 18:25:27.872961       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-06-09 18:24:57.872292279 +0000 UTC m=+0.024712858) (total time: 30.000569629s):
	* Trace[2019727887]: [30.000569629s] [30.000569629s] END
	* E0609 18:25:27.873010       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* I0609 18:25:27.872975       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-06-09 18:24:57.872351839 +0000 UTC m=+0.024772395) (total time: 30.000590668s):
	* Trace[1427131847]: [30.000590668s] [30.000590668s] END
	* E0609 18:25:27.873035       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* I0609 18:25:27.873067       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-06-09 18:24:57.87228283 +0000 UTC m=+0.024703419) (total time: 30.00074529s):
	* Trace[939984059]: [30.00074529s] [30.00074529s] END
	* E0609 18:25:27.873085       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* 
	* ==> coredns [a361ad6db795] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
	* CoreDNS-1.6.7
	* linux/amd64, go1.13.6, da7f65b
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] SIGTERM: Shutting down servers then terminating
	* [INFO] plugin/health: Going into lameduck mode for 5s
	* I0609 18:22:52.856067       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-06-09 18:22:22.855291768 +0000 UTC m=+0.093347899) (total time: 30.000590736s):
	* Trace[2019727887]: [30.000590736s] [30.000590736s] END
	* E0609 18:22:52.856117       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* I0609 18:22:52.856176       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-06-09 18:22:22.855366805 +0000 UTC m=+0.093422959) (total time: 30.000560175s):
	* Trace[1427131847]: [30.000560175s] [30.000560175s] END
	* E0609 18:22:52.856193       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* I0609 18:22:52.856434       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-06-09 18:22:22.855355551 +0000 UTC m=+0.093411680) (total time: 30.001054642s):
	* Trace[939984059]: [30.001054642s] [30.001054642s] END
	* E0609 18:22:52.856453       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* 
	* ==> coredns [abebf43541dc] <==
	* .:53
	* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
	* CoreDNS-1.6.7
	* linux/amd64, go1.13.6, da7f65b
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	* I0609 18:25:27.871600       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-06-09 18:24:57.870853526 +0000 UTC m=+0.023226393) (total time: 30.000567573s):
	* Trace[2019727887]: [30.000567573s] [30.000567573s] END
	* E0609 18:25:27.871650       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* I0609 18:25:27.871678       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-06-09 18:24:57.870850996 +0000 UTC m=+0.023223796) (total time: 30.000616965s):
	* Trace[1427131847]: [30.000616965s] [30.000616965s] END
	* E0609 18:25:27.871694       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* I0609 18:25:27.871733       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-06-09 18:24:57.870963856 +0000 UTC m=+0.023336711) (total time: 30.000548676s):
	* Trace[939984059]: [30.000548676s] [30.000548676s] END
	* E0609 18:25:27.871748       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* 
	* ==> describe nodes <==
	* Name:               multinode-20200609112134-5469
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=multinode-20200609112134-5469
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=b72d7683536818416863536d77e7e628181d7fce
	*                     minikube.k8s.io/name=multinode-20200609112134-5469
	*                     minikube.k8s.io/updated_at=2020_06_09T11_21_59_0700
	*                     minikube.k8s.io/version=v1.11.0
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Tue, 09 Jun 2020 18:21:55 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  multinode-20200609112134-5469
	*   AcquireTime:     <unset>
	*   RenewTime:       Tue, 09 Jun 2020 18:26:26 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Tue, 09 Jun 2020 18:24:56 +0000   Tue, 09 Jun 2020 18:21:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Tue, 09 Jun 2020 18:24:56 +0000   Tue, 09 Jun 2020 18:21:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Tue, 09 Jun 2020 18:24:56 +0000   Tue, 09 Jun 2020 18:21:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Tue, 09 Jun 2020 18:24:56 +0000   Tue, 09 Jun 2020 18:22:09 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  172.17.0.2
	*   Hostname:    multinode-20200609112134-5469
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887012Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887012Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 9d8755a0c8c04a59b3004d28ea6c92cf
	*   System UUID:                94b56d30-a98d-4485-869b-9d805fe1b047
	*   Boot ID:                    64f3ac6d-30f2-41fc-bc23-3cf0dad66462
	*   Kernel Version:             4.9.0-12-amd64
	*   OS Image:                   Ubuntu 19.10
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://19.3.2
	*   Kubelet Version:            v1.18.3
	*   Kube-Proxy Version:         v1.18.3
	* PodCIDR:                      10.244.0.0/24
	* PodCIDRs:                     10.244.0.0/24
	* Non-terminated Pods:          (9 in total)
	*   Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	*   kube-system                 coredns-66bff467f8-2ptph                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m10s
	*   kube-system                 coredns-66bff467f8-8lp4d                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m10s
	*   kube-system                 etcd-multinode-20200609112134-5469                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	*   kube-system                 kindnet-jq8cp                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m9s
	*   kube-system                 kube-apiserver-multinode-20200609112134-5469             250m (3%)     0 (0%)      0 (0%)           0 (0%)         94s
	*   kube-system                 kube-controller-manager-multinode-20200609112134-5469    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m30s
	*   kube-system                 kube-proxy-wcwvr                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	*   kube-system                 kube-scheduler-multinode-20200609112134-5469             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m30s
	*   kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                850m (10%)  100m (1%)
	*   memory             190Mi (0%)  390Mi (1%)
	*   ephemeral-storage  0 (0%)      0 (0%)
	*   hugepages-1Gi      0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age                    From                                       Message
	*   ----    ------                   ----                   ----                                       -------
	*   Normal  NodeHasSufficientMemory  4m41s (x5 over 4m42s)  kubelet, multinode-20200609112134-5469     Node multinode-20200609112134-5469 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    4m41s (x4 over 4m42s)  kubelet, multinode-20200609112134-5469     Node multinode-20200609112134-5469 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     4m41s (x4 over 4m42s)  kubelet, multinode-20200609112134-5469     Node multinode-20200609112134-5469 status is now: NodeHasSufficientPID
	*   Normal  Starting                 4m31s                  kubelet, multinode-20200609112134-5469     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  4m31s                  kubelet, multinode-20200609112134-5469     Node multinode-20200609112134-5469 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    4m31s                  kubelet, multinode-20200609112134-5469     Node multinode-20200609112134-5469 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     4m31s                  kubelet, multinode-20200609112134-5469     Node multinode-20200609112134-5469 status is now: NodeHasSufficientPID
	*   Normal  NodeNotReady             4m31s                  kubelet, multinode-20200609112134-5469     Node multinode-20200609112134-5469 status is now: NodeNotReady
	*   Normal  NodeAllocatableEnforced  4m30s                  kubelet, multinode-20200609112134-5469     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                4m21s                  kubelet, multinode-20200609112134-5469     Node multinode-20200609112134-5469 status is now: NodeReady
	*   Normal  Starting                 4m8s                   kube-proxy, multinode-20200609112134-5469  Starting kube-proxy.
	*   Normal  Starting                 104s                   kubelet, multinode-20200609112134-5469     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  104s (x8 over 104s)    kubelet, multinode-20200609112134-5469     Node multinode-20200609112134-5469 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    104s (x8 over 104s)    kubelet, multinode-20200609112134-5469     Node multinode-20200609112134-5469 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     104s (x7 over 104s)    kubelet, multinode-20200609112134-5469     Node multinode-20200609112134-5469 status is now: NodeHasSufficientPID
	*   Normal  NodeAllocatableEnforced  104s                   kubelet, multinode-20200609112134-5469     Updated Node Allocatable limit across pods
	*   Normal  Starting                 84s                    kube-proxy, multinode-20200609112134-5469  Starting kube-proxy.
	* 
	* 
	* Name:               multinode-20200609112134-5469-m02
	* Roles:              <none>
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=multinode-20200609112134-5469-m02
	*                     kubernetes.io/os=linux
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Tue, 09 Jun 2020 18:22:36 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  multinode-20200609112134-5469-m02
	*   AcquireTime:     <unset>
	*   RenewTime:       Tue, 09 Jun 2020 18:26:27 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Tue, 09 Jun 2020 18:26:28 +0000   Tue, 09 Jun 2020 18:26:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Tue, 09 Jun 2020 18:26:28 +0000   Tue, 09 Jun 2020 18:26:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Tue, 09 Jun 2020 18:26:28 +0000   Tue, 09 Jun 2020 18:26:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Tue, 09 Jun 2020 18:26:28 +0000   Tue, 09 Jun 2020 18:26:28 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  172.17.0.4
	*   Hostname:    multinode-20200609112134-5469-m02
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887012Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887012Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 a78ab306632f46e1b8fb8eecca06762d
	*   System UUID:                a9fa822c-f7e0-4f19-89dc-3d1b1622c186
	*   Boot ID:                    64f3ac6d-30f2-41fc-bc23-3cf0dad66462
	*   Kernel Version:             4.9.0-12-amd64
	*   OS Image:                   Ubuntu 19.10
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://19.3.2
	*   Kubelet Version:            v1.18.3
	*   Kube-Proxy Version:         v1.18.3
	* PodCIDR:                      10.244.1.0/24
	* PodCIDRs:                     10.244.1.0/24
	* Non-terminated Pods:          (2 in total)
	*   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	*   kube-system                 kindnet-hf42h       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m54s
	*   kube-system                 kube-proxy-h2pgs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests   Limits
	*   --------           --------   ------
	*   cpu                100m (1%)  100m (1%)
	*   memory             50Mi (0%)  50Mi (0%)
	*   ephemeral-storage  0 (0%)     0 (0%)
	*   hugepages-1Gi      0 (0%)     0 (0%)
	*   hugepages-2Mi      0 (0%)     0 (0%)
	* Events:
	*   Type    Reason                   Age                    From                                           Message
	*   ----    ------                   ----                   ----                                           -------
	*   Normal  NodeHasSufficientMemory  3m54s (x2 over 3m54s)  kubelet, multinode-20200609112134-5469-m02     Node multinode-20200609112134-5469-m02 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    3m54s (x2 over 3m54s)  kubelet, multinode-20200609112134-5469-m02     Node multinode-20200609112134-5469-m02 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     3m54s (x2 over 3m54s)  kubelet, multinode-20200609112134-5469-m02     Node multinode-20200609112134-5469-m02 status is now: NodeHasSufficientPID
	*   Normal  NodeAllocatableEnforced  3m54s                  kubelet, multinode-20200609112134-5469-m02     Updated Node Allocatable limit across pods
	*   Normal  Starting                 3m54s                  kubelet, multinode-20200609112134-5469-m02     Starting kubelet.
	*   Normal  Starting                 3m52s                  kube-proxy, multinode-20200609112134-5469-m02  Starting kube-proxy.
	*   Normal  NodeReady                3m44s                  kubelet, multinode-20200609112134-5469-m02     Node multinode-20200609112134-5469-m02 status is now: NodeReady
	*   Normal  NodeAllocatableEnforced  3s                     kubelet, multinode-20200609112134-5469-m02     Updated Node Allocatable limit across pods
	*   Normal  Starting                 3s                     kubelet, multinode-20200609112134-5469-m02     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  2s (x2 over 3s)        kubelet, multinode-20200609112134-5469-m02     Node multinode-20200609112134-5469-m02 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    2s (x2 over 3s)        kubelet, multinode-20200609112134-5469-m02     Node multinode-20200609112134-5469-m02 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     2s (x2 over 3s)        kubelet, multinode-20200609112134-5469-m02     Node multinode-20200609112134-5469-m02 status is now: NodeHasSufficientPID
	*   Normal  NodeReady                2s                     kubelet, multinode-20200609112134-5469-m02     Node multinode-20200609112134-5469-m02 status is now: NodeReady
	*   Normal  Starting                 2s                     kube-proxy, multinode-20200609112134-5469-m02  Starting kube-proxy.
	* 
	* 
	* Name:               multinode-20200609112134-5469-m03
	* Roles:              <none>
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=multinode-20200609112134-5469-m03
	*                     kubernetes.io/os=linux
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Tue, 09 Jun 2020 18:22:51 +0000
	* Taints:             node.kubernetes.io/unreachable:NoExecute
	*                     node.kubernetes.io/unreachable:NoSchedule
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  multinode-20200609112134-5469-m03
	*   AcquireTime:     <unset>
	*   RenewTime:       Tue, 09 Jun 2020 18:23:02 +0000
	* Conditions:
	*   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	*   ----             ------    -----------------                 ------------------                ------              -------
	*   MemoryPressure   Unknown   Tue, 09 Jun 2020 18:23:01 +0000   Tue, 09 Jun 2020 18:26:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	*   DiskPressure     Unknown   Tue, 09 Jun 2020 18:23:01 +0000   Tue, 09 Jun 2020 18:26:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	*   PIDPressure      Unknown   Tue, 09 Jun 2020 18:23:01 +0000   Tue, 09 Jun 2020 18:26:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	*   Ready            Unknown   Tue, 09 Jun 2020 18:23:01 +0000   Tue, 09 Jun 2020 18:26:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	* Addresses:
	*   InternalIP:  172.17.0.6
	*   Hostname:    multinode-20200609112134-5469-m03
	* Capacity:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887012Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                8
	*   ephemeral-storage:  515928484Ki
	*   hugepages-1Gi:      0
	*   hugepages-2Mi:      0
	*   memory:             30887012Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 2978e76130cd4f979bf6877ff4937bb0
	*   System UUID:                ffaa34ed-1840-42b4-ab6c-4420418946f2
	*   Boot ID:                    64f3ac6d-30f2-41fc-bc23-3cf0dad66462
	*   Kernel Version:             4.9.0-12-amd64
	*   OS Image:                   Ubuntu 19.10
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://19.3.2
	*   Kubelet Version:            v1.18.3
	*   Kube-Proxy Version:         v1.18.3
	* PodCIDR:                      10.244.2.0/24
	* PodCIDRs:                     10.244.2.0/24
	* Non-terminated Pods:          (2 in total)
	*   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	*   kube-system                 kindnet-zbv8n       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m39s
	*   kube-system                 kube-proxy-ndttk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests   Limits
	*   --------           --------   ------
	*   cpu                100m (1%)  100m (1%)
	*   memory             50Mi (0%)  50Mi (0%)
	*   ephemeral-storage  0 (0%)     0 (0%)
	*   hugepages-1Gi      0 (0%)     0 (0%)
	*   hugepages-2Mi      0 (0%)     0 (0%)
	* Events:
	*   Type    Reason                   Age                    From                                           Message
	*   ----    ------                   ----                   ----                                           -------
	*   Normal  Starting                 3m39s                  kubelet, multinode-20200609112134-5469-m03     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  3m39s (x2 over 3m39s)  kubelet, multinode-20200609112134-5469-m03     Node multinode-20200609112134-5469-m03 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    3m39s (x2 over 3m39s)  kubelet, multinode-20200609112134-5469-m03     Node multinode-20200609112134-5469-m03 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     3m39s (x2 over 3m39s)  kubelet, multinode-20200609112134-5469-m03     Node multinode-20200609112134-5469-m03 status is now: NodeHasSufficientPID
	*   Normal  NodeAllocatableEnforced  3m39s                  kubelet, multinode-20200609112134-5469-m03     Updated Node Allocatable limit across pods
	*   Normal  NodeReady                3m29s                  kubelet, multinode-20200609112134-5469-m03     Node multinode-20200609112134-5469-m03 status is now: NodeReady
	*   Normal  Starting                 3m28s                  kube-proxy, multinode-20200609112134-5469-m03  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [  +0.440086] piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
	* [  +0.011675] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
	* [  +0.026654] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10
	* [  +0.029986] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
	* [  +3.127438] systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
	* [ +12.512952] vboxdrv: loading out-of-tree module taints kernel.
	* [  +0.284944] VBoxNetFlt: Successfully started.
	* [  +0.021543] VBoxNetAdp: Successfully started.
	* [Jun 9 17:37] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +14.156107] cgroup: cgroup2: unknown option "nsdelegate"
	* [  +3.682657] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 17:38] IPv4: martian source 10.1.0.3 from 10.1.0.3, on dev mybridge
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff a6 15 2d b4 60 a1 08 06        ........-.`...
	* [  +0.006604] IPv4: martian source 10.1.0.2 from 10.1.0.2, on dev mybridge
	* [  +0.000004] ll header: 00000000: ff ff ff ff ff ff 0a 03 cb 6c cf ba 08 06        .........l....
	* [Jun 9 17:39] IPv4: martian source 10.1.0.2 from 10.1.0.2, on dev mybridge
	* [  +0.000003] ll header: 00000000: ff ff ff ff ff ff 26 1e c4 0d 86 16 08 06        ......&.......
	* [  +6.307972] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 18:19] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 18:21] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 18:22] cgroup: cgroup2: unknown option "nsdelegate"
	* [ +14.385288] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 18:23] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 18:24] cgroup: cgroup2: unknown option "nsdelegate"
	* [Jun 9 18:26] cgroup: cgroup2: unknown option "nsdelegate"
	* 
	* ==> etcd [2cabbaaffd4a] <==
	* 2020-06-09 18:24:51.184655 I | etcdserver: restarting member 40fd14fa28910cab in cluster a6ea9ad1b116d02f at commit index 858
	* raft2020/06/09 18:24:51 INFO: 40fd14fa28910cab switched to configuration voters=()
	* raft2020/06/09 18:24:51 INFO: 40fd14fa28910cab became follower at term 2
	* raft2020/06/09 18:24:51 INFO: newRaft 40fd14fa28910cab [peers: [], term: 2, commit: 858, applied: 0, lastindex: 858, lastterm: 2]
	* 2020-06-09 18:24:51.190614 W | auth: simple token is not cryptographically signed
	* 2020-06-09 18:24:51.192709 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	* raft2020/06/09 18:24:51 INFO: 40fd14fa28910cab switched to configuration voters=(4682922252190157995)
	* 2020-06-09 18:24:51.193591 I | etcdserver/membership: added member 40fd14fa28910cab [https://172.17.0.4:2380] to cluster a6ea9ad1b116d02f
	* 2020-06-09 18:24:51.193753 N | etcdserver/membership: set the initial cluster version to 3.4
	* 2020-06-09 18:24:51.193804 I | etcdserver/api: enabled capabilities for version 3.4
	* 2020-06-09 18:24:51.195645 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	* 2020-06-09 18:24:51.195785 I | embed: listening for peers on 172.17.0.2:2380
	* 2020-06-09 18:24:51.196389 I | embed: listening for metrics on http://127.0.0.1:2381
	* raft2020/06/09 18:24:52 INFO: 40fd14fa28910cab is starting a new election at term 2
	* raft2020/06/09 18:24:52 INFO: 40fd14fa28910cab became candidate at term 3
	* raft2020/06/09 18:24:52 INFO: 40fd14fa28910cab received MsgVoteResp from 40fd14fa28910cab at term 3
	* raft2020/06/09 18:24:52 INFO: 40fd14fa28910cab became leader at term 3
	* raft2020/06/09 18:24:52 INFO: raft.node: 40fd14fa28910cab elected leader 40fd14fa28910cab at term 3
	* 2020-06-09 18:24:52.386878 I | embed: ready to serve client requests
	* 2020-06-09 18:24:52.386941 I | etcdserver: published {Name:multinode-20200609112134-5469 ClientURLs:[https://172.17.0.2:2379]} to cluster a6ea9ad1b116d02f
	* 2020-06-09 18:24:52.386959 I | embed: ready to serve client requests
	* 2020-06-09 18:24:52.388516 I | embed: serving client requests on 172.17.0.2:2379
	* 2020-06-09 18:24:52.388606 I | embed: serving client requests on 127.0.0.1:2379
	* 2020-06-09 18:24:56.047594 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-multinode-20200609112134-5469\" " with result "range_response_count:1 size:3791" took too long (103.950102ms) to execute
	* 2020-06-09 18:24:56.056783 W | etcdserver: read-only range request "key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" " with result "range_response_count:0 size:5" took too long (107.892568ms) to execute
	* 
	* ==> kernel <==
	*  18:26:30 up  1:09,  0 users,  load average: 1.21, 1.74, 1.45
	* Linux multinode-20200609112134-5469 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20) x86_64 x86_64 x86_64 GNU/Linux
	* PRETTY_NAME="Ubuntu 19.10"
	* 
	* ==> kube-apiserver [4e6aced09728] <==
	*       --max-connection-bytes-per-sec int          If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests.
	*       --proxy-client-cert-file string             Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification.
	*       --proxy-client-key-file string              Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.
	*       --service-account-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)
	*       --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.
	*       --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)
	* 
	* Global flags:
	* 
	*       --add-dir-header                   If true, adds the file directory to the header
	*       --alsologtostderr                  log to standard error as well as files
	*   -h, --help                             help for kube-apiserver
	*       --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
	*       --log-dir string                   If non-empty, write log files in this directory
	*       --log-file string                  If non-empty, use this log file
	*       --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
	*       --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)
	*       --logtostderr                      log to standard error instead of files (default true)
	*       --skip-headers                     If true, avoid header prefixes in the log messages
	*       --skip-log-headers                 If true, avoid headers when opening log files
	*       --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)
	*   -v, --v Level                          number for the log level verbosity
	*       --version version[=true]           Print version information and quit
	*       --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging
	* 
	* 
	* ==> kube-apiserver [c7888b70b6f6] <==
	* I0609 18:25:06.137448       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	* I0609 18:25:06.137481       1 autoregister_controller.go:141] Starting autoregister controller
	* I0609 18:25:06.137487       1 cache.go:32] Waiting for caches to sync for autoregister controller
	* I0609 18:25:06.137521       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	* I0609 18:25:06.137556       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	* I0609 18:25:06.138071       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	* I0609 18:25:06.138093       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	* I0609 18:25:06.239472       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	* I0609 18:25:06.249519       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	* I0609 18:25:06.251906       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	* I0609 18:25:06.251965       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	* I0609 18:25:06.259270       1 cache.go:39] Caches are synced for autoregister controller
	* E0609 18:25:06.266288       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.2, ResourceVersion: 0, AdditionalErrorMsg: 
	* I0609 18:25:06.338807       1 cache.go:39] Caches are synced for AvailableConditionController controller
	* I0609 18:25:07.134680       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	* I0609 18:25:07.134938       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	* I0609 18:25:07.139808       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	* W0609 18:25:07.373226       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.0.2]
	* I0609 18:25:07.374584       1 controller.go:606] quota admission added evaluator for: endpoints
	* I0609 18:25:07.380055       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	* I0609 18:25:07.692711       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	* I0609 18:25:07.710816       1 controller.go:606] quota admission added evaluator for: deployments.apps
	* I0609 18:25:07.757761       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	* I0609 18:25:07.779898       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	* I0609 18:25:07.789069       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	* 
	* ==> kube-controller-manager [a1867d20ab33] <==
	* I0609 18:25:35.289968       1 shared_informer.go:230] Caches are synced for attach detach 
	* I0609 18:25:35.298028       1 shared_informer.go:230] Caches are synced for job 
	* I0609 18:25:35.307014       1 shared_informer.go:230] Caches are synced for stateful set 
	* I0609 18:25:35.339317       1 shared_informer.go:230] Caches are synced for PVC protection 
	* I0609 18:25:35.432456       1 shared_informer.go:230] Caches are synced for disruption 
	* I0609 18:25:35.432497       1 disruption.go:339] Sending events to api server.
	* I0609 18:25:35.440145       1 shared_informer.go:230] Caches are synced for ReplicationController 
	* I0609 18:25:35.539621       1 shared_informer.go:230] Caches are synced for taint 
	* I0609 18:25:35.539751       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	* I0609 18:25:35.539796       1 taint_manager.go:187] Starting NoExecuteTaintManager
	* W0609 18:25:35.539896       1 node_lifecycle_controller.go:1048] Missing timestamp for Node multinode-20200609112134-5469. Assuming now as a timestamp.
	* W0609 18:25:35.539954       1 node_lifecycle_controller.go:1048] Missing timestamp for Node multinode-20200609112134-5469-m02. Assuming now as a timestamp.
	* W0609 18:25:35.539986       1 node_lifecycle_controller.go:1048] Missing timestamp for Node multinode-20200609112134-5469-m03. Assuming now as a timestamp.
	* I0609 18:25:35.540059       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	* I0609 18:25:35.540056       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20200609112134-5469", UID:"67411180-3bcb-457a-a87f-a3ecf4b10b7d", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node multinode-20200609112134-5469 event: Registered Node multinode-20200609112134-5469 in Controller
	* I0609 18:25:35.540092       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20200609112134-5469-m02", UID:"5854fad2-afb5-4130-a37d-3d994601c305", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node multinode-20200609112134-5469-m02 event: Registered Node multinode-20200609112134-5469-m02 in Controller
	* I0609 18:25:35.540105       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20200609112134-5469-m03", UID:"75a2c9bf-43b4-4580-acdd-aa669132c9d0", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node multinode-20200609112134-5469-m03 event: Registered Node multinode-20200609112134-5469-m03 in Controller
	* I0609 18:25:35.642403       1 shared_informer.go:230] Caches are synced for resource quota 
	* I0609 18:25:35.690500       1 shared_informer.go:230] Caches are synced for garbage collector 
	* I0609 18:25:35.690532       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I0609 18:25:35.691096       1 shared_informer.go:230] Caches are synced for garbage collector 
	* I0609 18:25:36.390990       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	* I0609 18:25:36.391086       1 shared_informer.go:230] Caches are synced for resource quota 
	* I0609 18:26:15.550987       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20200609112134-5469-m02", UID:"5854fad2-afb5-4130-a37d-3d994601c305", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node multinode-20200609112134-5469-m02 status is now: NodeNotReady
	* I0609 18:26:15.570948       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20200609112134-5469-m03", UID:"75a2c9bf-43b4-4580-acdd-aa669132c9d0", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node multinode-20200609112134-5469-m03 status is now: NodeNotReady
	* 
	* ==> kube-controller-manager [ee8718cef3df] <==
	* I0609 18:24:21.159604       1 shared_informer.go:230] Caches are synced for taint 
	* I0609 18:24:21.159727       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	* I0609 18:24:21.159734       1 taint_manager.go:187] Starting NoExecuteTaintManager
	* I0609 18:24:21.159878       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20200609112134-5469-m03", UID:"75a2c9bf-43b4-4580-acdd-aa669132c9d0", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node multinode-20200609112134-5469-m03 event: Registered Node multinode-20200609112134-5469-m03 in Controller
	* I0609 18:24:21.159927       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20200609112134-5469", UID:"67411180-3bcb-457a-a87f-a3ecf4b10b7d", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node multinode-20200609112134-5469 event: Registered Node multinode-20200609112134-5469 in Controller
	* I0609 18:24:21.159966       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"multinode-20200609112134-5469-m02", UID:"5854fad2-afb5-4130-a37d-3d994601c305", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node multinode-20200609112134-5469-m02 event: Registered Node multinode-20200609112134-5469-m02 in Controller
	* W0609 18:24:21.159907       1 node_lifecycle_controller.go:1048] Missing timestamp for Node multinode-20200609112134-5469-m03. Assuming now as a timestamp.
	* W0609 18:24:21.160150       1 node_lifecycle_controller.go:1048] Missing timestamp for Node multinode-20200609112134-5469. Assuming now as a timestamp.
	* W0609 18:24:21.160208       1 node_lifecycle_controller.go:1048] Missing timestamp for Node multinode-20200609112134-5469-m02. Assuming now as a timestamp.
	* I0609 18:24:21.160240       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	* I0609 18:24:21.169725       1 shared_informer.go:230] Caches are synced for daemon sets 
	* I0609 18:24:21.211289       1 shared_informer.go:230] Caches are synced for endpoint 
	* I0609 18:24:21.307876       1 shared_informer.go:230] Caches are synced for namespace 
	* I0609 18:24:21.318562       1 shared_informer.go:230] Caches are synced for service account 
	* I0609 18:24:21.547485       1 shared_informer.go:230] Caches are synced for stateful set 
	* I0609 18:24:21.548021       1 shared_informer.go:230] Caches are synced for attach detach 
	* I0609 18:24:21.553808       1 shared_informer.go:230] Caches are synced for PVC protection 
	* I0609 18:24:21.597166       1 shared_informer.go:230] Caches are synced for persistent volume 
	* I0609 18:24:21.627164       1 shared_informer.go:230] Caches are synced for job 
	* I0609 18:24:21.644062       1 shared_informer.go:230] Caches are synced for resource quota 
	* I0609 18:24:21.644532       1 shared_informer.go:230] Caches are synced for expand 
	* I0609 18:24:21.651423       1 shared_informer.go:230] Caches are synced for resource quota 
	* I0609 18:24:21.652322       1 shared_informer.go:230] Caches are synced for garbage collector 
	* I0609 18:24:21.652343       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I0609 18:24:21.744811       1 shared_informer.go:230] Caches are synced for garbage collector 
	* 
	* ==> kube-proxy [6be55225fd44] <==
	* W0609 18:22:22.755750       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	* I0609 18:22:22.776536       1 node.go:136] Successfully retrieved node IP: 172.17.0.4
	* I0609 18:22:22.776577       1 server_others.go:186] Using iptables Proxier.
	* I0609 18:22:22.839199       1 server.go:583] Version: v1.18.3
	* I0609 18:22:22.839911       1 conntrack.go:52] Setting nf_conntrack_max to 262144
	* I0609 18:22:22.840039       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I0609 18:22:22.840115       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I0609 18:22:22.840467       1 config.go:133] Starting endpoints config controller
	* I0609 18:22:22.840496       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	* I0609 18:22:22.845212       1 config.go:315] Starting service config controller
	* I0609 18:22:22.845565       1 shared_informer.go:223] Waiting for caches to sync for service config
	* I0609 18:22:22.943154       1 shared_informer.go:230] Caches are synced for endpoints config 
	* I0609 18:22:22.945853       1 shared_informer.go:230] Caches are synced for service config 
	* 
	* ==> kube-proxy [9010faa64c19] <==
	* I0609 18:25:06.262031       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I0609 18:25:06.262085       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* E0609 18:25:06.262792       1 server.go:621] starting metrics server failed: listen tcp 172.17.0.4:10249: bind: cannot assign requested address
	* I0609 18:25:06.263730       1 config.go:133] Starting endpoints config controller
	* I0609 18:25:06.263757       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	* I0609 18:25:06.263796       1 config.go:315] Starting service config controller
	* I0609 18:25:06.263802       1 shared_informer.go:223] Waiting for caches to sync for service config
	* I0609 18:25:06.363956       1 shared_informer.go:230] Caches are synced for endpoints config 
	* I0609 18:25:06.363970       1 shared_informer.go:230] Caches are synced for service config 
	* E0609 18:25:11.263156       1 server.go:621] starting metrics server failed: listen tcp 172.17.0.4:10249: bind: cannot assign requested address
	* E0609 18:25:16.263509       1 server.go:621] starting metrics server failed: listen tcp 172.17.0.4:10249: bind: cannot assign requested address
	* E0609 18:25:21.263896       1 server.go:621] starting metrics server failed: listen tcp 172.17.0.4:10249: bind: cannot assign requested address
	* E0609 18:25:26.264261       1 server.go:621] starting metrics server failed: listen tcp 172.17.0.4:10249: bind: cannot assign requested address
	* E0609 18:25:31.264705       1 server.go:621] starting metrics server failed: listen tcp 172.17.0.4:10249: bind: cannot assign requested address
	* E0609 18:25:36.265093       1 server.go:621] starting metrics server failed: listen tcp 172.17.0.4:10249: bind: cannot assign requested address
	* E0609 18:25:41.265604       1 server.go:621] starting metrics server failed: listen tcp 172.17.0.4:10249: bind: cannot assign requested address
	* E0609 18:25:46.265957       1 server.go:621] starting metrics server failed: listen tcp 172.17.0.4:10249: bind: cannot assign requested address
	* E0609 18:25:51.266267       1 server.go:621] starting metrics server failed: listen tcp 172.17.0.4:10249: bind: cannot assign requested address
	* E0609 18:25:56.266661       1 server.go:621] starting metrics server failed: listen tcp 172.17.0.4:10249: bind: cannot assign requested address
	* E0609 18:26:01.266978       1 server.go:621] starting metrics server failed: listen tcp 172.17.0.4:10249: bind: cannot assign requested address
	* E0609 18:26:06.267342       1 server.go:621] starting metrics server failed: listen tcp 172.17.0.4:10249: bind: cannot assign requested address
	* E0609 18:26:11.267710       1 server.go:621] starting metrics server failed: listen tcp 172.17.0.4:10249: bind: cannot assign requested address
	* E0609 18:26:16.268056       1 server.go:621] starting metrics server failed: listen tcp 172.17.0.4:10249: bind: cannot assign requested address
	* E0609 18:26:21.268394       1 server.go:621] starting metrics server failed: listen tcp 172.17.0.4:10249: bind: cannot assign requested address
	* E0609 18:26:26.268795       1 server.go:621] starting metrics server failed: listen tcp 172.17.0.4:10249: bind: cannot assign requested address
	* 
	* ==> kube-scheduler [479c983f54af] <==
	* E0609 18:24:59.484431       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=812&timeout=7m27s&timeoutSeconds=447&watch=true: dial tcp 172.17.0.2:8443: connect: connection refused
	* E0609 18:24:59.615923       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=804: dial tcp 172.17.0.2:8443: connect: connection refused
	* E0609 18:24:59.845383       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?resourceVersion=804: dial tcp 172.17.0.2:8443: connect: connection refused
	* E0609 18:24:59.861006       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?resourceVersion=804: dial tcp 172.17.0.2:8443: connect: connection refused
	* E0609 18:24:59.950556       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=804: dial tcp 172.17.0.2:8443: connect: connection refused
	* E0609 18:25:00.436971       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?resourceVersion=804: dial tcp 172.17.0.2:8443: connect: connection refused
	* E0609 18:25:00.482056       1 reflector.go:382] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to watch *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=813&timeoutSeconds=546&watch=true: dial tcp 172.17.0.2:8443: connect: connection refused
	* E0609 18:25:00.485150       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=812&timeout=9m45s&timeoutSeconds=585&watch=true: dial tcp 172.17.0.2:8443: connect: connection refused
	* E0609 18:25:00.502675       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=804: dial tcp 172.17.0.2:8443: connect: connection refused
	* E0609 18:25:00.711269       1 leaderelection.go:320] error retrieving resource lock kube-system/kube-scheduler: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: dial tcp 172.17.0.2:8443: connect: connection refused
	* E0609 18:25:00.797928       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?resourceVersion=804: dial tcp 172.17.0.2:8443: connect: connection refused
	* E0609 18:25:01.483232       1 reflector.go:382] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to watch *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=813&timeoutSeconds=363&watch=true: dial tcp 172.17.0.2:8443: connect: connection refused
	* E0609 18:25:01.486152       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=812&timeout=6m21s&timeoutSeconds=381&watch=true: dial tcp 172.17.0.2:8443: connect: connection refused
	* E0609 18:25:06.163506       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0609 18:25:06.249373       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0609 18:25:06.339488       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0609 18:25:06.339592       1 reflector.go:382] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to watch *v1.Pod: unknown (get pods)
	* E0609 18:25:06.339728       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0609 18:25:06.339844       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0609 18:25:06.339949       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0609 18:25:06.340039       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0609 18:25:06.340096       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: unknown (get nodes)
	* E0609 18:25:06.340173       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0609 18:25:06.340332       1 leaderelection.go:320] error retrieving resource lock kube-system/kube-scheduler: endpoints "kube-scheduler" is forbidden: User "system:kube-scheduler" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
	* I0609 18:25:12.762403       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
	* 
	* ==> kube-scheduler [fe5c5564b6a7] <==
	* I0609 18:21:55.841627       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	* W0609 18:21:55.844511       1 authorization.go:47] Authorization is disabled
	* W0609 18:21:55.844620       1 authentication.go:40] Authentication is disabled
	* I0609 18:21:55.844651       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	* I0609 18:21:55.848222       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I0609 18:21:55.848245       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	* I0609 18:21:55.849666       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	* I0609 18:21:55.849772       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	* E0609 18:21:55.855569       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0609 18:21:55.855938       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0609 18:21:55.855970       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E0609 18:21:55.856056       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0609 18:21:55.856088       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E0609 18:21:55.856164       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E0609 18:21:55.856249       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E0609 18:21:55.856315       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0609 18:21:55.856417       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E0609 18:21:56.760017       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E0609 18:21:56.842674       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E0609 18:21:56.915698       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E0609 18:21:56.962538       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E0609 18:21:56.988851       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* I0609 18:21:59.548530       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* I0609 18:21:59.650519       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
	* I0609 18:21:59.742662       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2020-06-09 18:24:40 UTC, end at Tue 2020-06-09 18:26:31 UTC. --
	* Jun 09 18:25:01 multinode-20200609112134-5469 kubelet[535]: E0609 18:25:01.542275     535 reflector.go:382] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmultinode-20200609112134-5469&resourceVersion=812&timeoutSeconds=440&watch=true: dial tcp 172.17.0.2:8443: connect: connection refused
	* Jun 09 18:25:01 multinode-20200609112134-5469 kubelet[535]: E0609 18:25:01.558715     535 reflector.go:178] object-"kube-system"/"storage-provisioner-token-hhxgl": Failed to list *v1.Secret: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dstorage-provisioner-token-hhxgl&resourceVersion=804: dial tcp 172.17.0.2:8443: connect: connection refused
	* Jun 09 18:25:01 multinode-20200609112134-5469 kubelet[535]: E0609 18:25:01.758801     535 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?resourceVersion=804: dial tcp 172.17.0.2:8443: connect: connection refused
	* Jun 09 18:25:01 multinode-20200609112134-5469 kubelet[535]: W0609 18:25:01.958636     535 status_manager.go:556] Failed to get status for pod "coredns-66bff467f8-8lp4d_kube-system(31cce803-6b29-42aa-a42d-f4fbd2c0fac2)": Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-66bff467f8-8lp4d: dial tcp 172.17.0.2:8443: connect: connection refused
	* Jun 09 18:25:01 multinode-20200609112134-5469 kubelet[535]: I0609 18:25:01.959002     535 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 4e6aced09728dcb6d4ab390806dcb2abddf2c6ed436103138043bb071a5418dd
	* Jun 09 18:25:02 multinode-20200609112134-5469 kubelet[535]: E0609 18:25:02.158649     535 reflector.go:178] object-"kube-system"/"kube-proxy-token-rglfc": Failed to list *v1.Secret: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dkube-proxy-token-rglfc&resourceVersion=804: dial tcp 172.17.0.2:8443: connect: connection refused
	* Jun 09 18:25:06 multinode-20200609112134-5469 kubelet[535]: E0609 18:25:06.151777     535 reflector.go:178] object-"kube-system"/"kindnet-token-h9nf8": Failed to list *v1.Secret: secrets "kindnet-token-h9nf8" is forbidden: User "system:node:multinode-20200609112134-5469" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "multinode-20200609112134-5469" and this object
	* Jun 09 18:25:06 multinode-20200609112134-5469 kubelet[535]: E0609 18:25:06.151777     535 reflector.go:178] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:multinode-20200609112134-5469" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "multinode-20200609112134-5469" and this object
	* Jun 09 18:25:06 multinode-20200609112134-5469 kubelet[535]: E0609 18:25:06.151816     535 reflector.go:178] object-"kube-system"/"coredns-token-c9nkp": Failed to list *v1.Secret: secrets "coredns-token-c9nkp" is forbidden: User "system:node:multinode-20200609112134-5469" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "multinode-20200609112134-5469" and this object
	* Jun 09 18:25:06 multinode-20200609112134-5469 kubelet[535]: E0609 18:25:06.151849     535 reflector.go:178] object-"kube-system"/"storage-provisioner-token-hhxgl": Failed to list *v1.Secret: secrets "storage-provisioner-token-hhxgl" is forbidden: User "system:node:multinode-20200609112134-5469" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "multinode-20200609112134-5469" and this object
	* Jun 09 18:25:07 multinode-20200609112134-5469 kubelet[535]: E0609 18:25:07.466283     535 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Jun 09 18:25:07 multinode-20200609112134-5469 kubelet[535]: E0609 18:25:07.466322     535 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Jun 09 18:25:12 multinode-20200609112134-5469 kubelet[535]: I0609 18:25:12.349033     535 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: da36dd84ccec04931944218ddb1fbc260672f20c6adb290ac8100fbcca4135c0
	* Jun 09 18:25:17 multinode-20200609112134-5469 kubelet[535]: E0609 18:25:17.481299     535 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Jun 09 18:25:17 multinode-20200609112134-5469 kubelet[535]: E0609 18:25:17.481355     535 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Jun 09 18:25:27 multinode-20200609112134-5469 kubelet[535]: E0609 18:25:27.494024     535 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Jun 09 18:25:27 multinode-20200609112134-5469 kubelet[535]: E0609 18:25:27.494089     535 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Jun 09 18:25:28 multinode-20200609112134-5469 kubelet[535]: I0609 18:25:28.001953     535 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: a5f49f42dcd654f0da81168f8580afa02fe844dcffa438f0c352e21d400d3de7
	* Jun 09 18:25:28 multinode-20200609112134-5469 kubelet[535]: I0609 18:25:28.002371     535 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 7f709e9a5b2f81df4c71039cc800507992182cd05375b0058807d3198e98fbeb
	* Jun 09 18:25:28 multinode-20200609112134-5469 kubelet[535]: E0609 18:25:28.002863     535 pod_workers.go:191] Error syncing pod ad7f51dd-d358-4e33-bada-06ae37019d42 ("storage-provisioner_kube-system(ad7f51dd-d358-4e33-bada-06ae37019d42)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ad7f51dd-d358-4e33-bada-06ae37019d42)"
	* Jun 09 18:25:37 multinode-20200609112134-5469 kubelet[535]: E0609 18:25:37.505198     535 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	* Jun 09 18:25:37 multinode-20200609112134-5469 kubelet[535]: E0609 18:25:37.505240     535 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	* Jun 09 18:25:42 multinode-20200609112134-5469 kubelet[535]: I0609 18:25:42.348980     535 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 7f709e9a5b2f81df4c71039cc800507992182cd05375b0058807d3198e98fbeb
	* Jun 09 18:25:46 multinode-20200609112134-5469 kubelet[535]: I0609 18:25:46.344406     535 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 949c1b35e2481c726ef99da93f192999d767e7e2505ce7065d83aea900a02c8b
	* Jun 09 18:25:46 multinode-20200609112134-5469 kubelet[535]: I0609 18:25:46.362600     535 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: f3bdbb7b389ce3a6c8db4bdafd43590f19165cc982e33acef2fce3da48b21ca9
	* 
	* ==> storage-provisioner [7f709e9a5b2f] <==
	* F0609 18:25:27.780294       1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
	* 
	* ==> storage-provisioner [984910f542ea] <==

                                                
                                                
-- /stdout --
helpers.go:246: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20200609112134-5469 -n multinode-20200609112134-5469
helpers.go:253: (dbg) Run:  kubectl --context multinode-20200609112134-5469 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers.go:259: non-running pods: kindnet-zbv8n
helpers.go:261: ======> post-mortem[TestMultiNode]: describe non-running pods <======
helpers.go:264: (dbg) Run:  kubectl --context multinode-20200609112134-5469 describe pod kindnet-zbv8n
helpers.go:264: (dbg) Non-zero exit: kubectl --context multinode-20200609112134-5469 describe pod kindnet-zbv8n: exit status 1 (119.610494ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "kindnet-zbv8n" not found

                                                
                                                
** /stderr **
helpers.go:266: kubectl --context multinode-20200609112134-5469 describe pod kindnet-zbv8n: exit status 1
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20200609112134-5469
helpers.go:170: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20200609112134-5469: (4.852087278s)

                                                
                                    
TestPreload (147.63s)

                                                
                                                
=== RUN   TestPreload
--- PASS: TestPreload (147.63s)
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20200609112637-5469 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20200609112637-5469 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: (1m12.73067077s)
preload_test.go:50: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20200609112637-5469 -- docker pull busybox
preload_test.go:60: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20200609112637-5469 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3
preload_test.go:60: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20200609112637-5469 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3: (1m11.004547314s)
preload_test.go:64: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20200609112637-5469 -- docker images
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20200609112637-5469
helpers.go:170: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20200609112637-5469: (2.834783351s)
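The preload test above exercises the no-preload path: start a cluster with --preload=false on an older Kubernetes version, pull an extra image inside the node, restart the same profile on a newer version, and confirm the pulled image is still present. A minimal manual sketch of the same flow (the profile name test-preload is illustrative; the flags are the ones the test itself passes):

    minikube start -p test-preload --memory=2200 --preload=false --driver=docker --kubernetes-version=v1.17.0
    minikube ssh -p test-preload -- docker pull busybox
    minikube start -p test-preload --memory=2200 --driver=docker --kubernetes-version=v1.17.3
    minikube ssh -p test-preload -- docker images    # busybox should still be listed after the upgrade
    minikube delete -p test-preload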

                                                
                                    
TestVersionUpgrade (288.28s)

                                                
                                                
=== RUN   TestVersionUpgrade
=== PAUSE TestVersionUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestVersionUpgrade
--- PASS: TestVersionUpgrade (288.28s)
version_upgrade_test.go:74: (dbg) Run:  /tmp/minikube-release.261392145.exe start -p vupgrade-20200609112904-5469 --memory=2200 --iso-url=https://storage.googleapis.com/minikube/iso/integration-test.iso --kubernetes-version=v1.13.0 --alsologtostderr --driver=docker 
version_upgrade_test.go:74: (dbg) Done: /tmp/minikube-release.261392145.exe start -p vupgrade-20200609112904-5469 --memory=2200 --iso-url=https://storage.googleapis.com/minikube/iso/integration-test.iso --kubernetes-version=v1.13.0 --alsologtostderr --driver=docker : (2m30.333804466s)
version_upgrade_test.go:83: (dbg) Run:  /tmp/minikube-release.261392145.exe stop -p vupgrade-20200609112904-5469
version_upgrade_test.go:83: (dbg) Done: /tmp/minikube-release.261392145.exe stop -p vupgrade-20200609112904-5469: (4.437608916s)
version_upgrade_test.go:88: (dbg) Run:  /tmp/minikube-release.261392145.exe -p vupgrade-20200609112904-5469 status --format={{.Host}}
version_upgrade_test.go:88: (dbg) Non-zero exit: /tmp/minikube-release.261392145.exe -p vupgrade-20200609112904-5469 status --format={{.Host}}: exit status 7 (174.338553ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:90: status error: exit status 7 (may be ok)
version_upgrade_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p vupgrade-20200609112904-5469 --memory=2200 --kubernetes-version=v1.18.4-rc.0 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p vupgrade-20200609112904-5469 --memory=2200 --kubernetes-version=v1.18.4-rc.0 --alsologtostderr -v=1 --driver=docker : (1m53.424152917s)
version_upgrade_test.go:104: (dbg) Run:  kubectl --context vupgrade-20200609112904-5469 version --output=json
version_upgrade_test.go:123: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:125: (dbg) Run:  /tmp/minikube-release.261392145.exe start -p vupgrade-20200609112904-5469 --memory=2200 --kubernetes-version=v1.13.0 --driver=docker 
version_upgrade_test.go:125: (dbg) Non-zero exit: /tmp/minikube-release.261392145.exe start -p vupgrade-20200609112904-5469 --memory=2200 --kubernetes-version=v1.13.0 --driver=docker : exit status 78 (226.164947ms)

                                                
                                                
-- stdout --
	* [vupgrade-20200609112904-5469] minikube v1.11.0 on Debian 9.12
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube
	  - MINIKUBE_LOCATION=8417
	* Using the docker driver based on existing profile
	! You have selected Kubernetes 1.13.0, but the existing cluster is running Kubernetes 1.18.4-rc.0

                                                
                                                
-- /stdout --
** stderr ** 
	X Non-destructive downgrades are not supported, but you can proceed with one of the following options:
	
	  1) Recreate the cluster with Kubernetes 1.13.0, by running:
	
	    minikube delete -p vupgrade-20200609112904-5469
	    minikube start -p vupgrade-20200609112904-5469 --kubernetes-version=v1.13.0
	
	  2) Create a second cluster with Kubernetes 1.13.0, by running:
	
	    minikube start -p vupgrade-20200609112904-54692 --kubernetes-version=v1.13.0
	
	  3) Use the existing cluster at version Kubernetes 1.18.4-rc.0, by running:
	
	    minikube start -p vupgrade-20200609112904-5469 --kubernetes-version=v1.18.4-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:129: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p vupgrade-20200609112904-5469 --memory=2200 --kubernetes-version=v1.18.4-rc.0 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p vupgrade-20200609112904-5469 --memory=2200 --kubernetes-version=v1.18.4-rc.0 --alsologtostderr -v=1 --driver=docker : (14.357574225s)
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p vupgrade-20200609112904-5469
helpers.go:170: (dbg) Done: out/minikube-linux-amd64 delete -p vupgrade-20200609112904-5469: (4.536305258s)
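The upgrade test reduces to: start on the old release, stop, restart the same profile with a newer --kubernetes-version, then confirm that a downgrade request is refused (exit status 78 above) rather than applied. A hedged sketch, with vupgrade as a placeholder profile name:

    minikube start -p vupgrade --memory=2200 --kubernetes-version=v1.13.0 --driver=docker
    minikube stop -p vupgrade
    minikube start -p vupgrade --memory=2200 --kubernetes-version=v1.18.4-rc.0 --driver=docker
    # asking the same profile to go back down is rejected; minikube suggests delete/recreate or a second profile instead:
    minikube start -p vupgrade --memory=2200 --kubernetes-version=v1.13.0 --driver=docker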

                                                
                                    
TestPause/serial/Start (278.67s)

                                                
                                                
=== RUN   TestPause/serial/Start
--- PASS: TestPause/serial/Start (278.67s)
pause_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20200609112904-5469 --memory=1800 --install-addons=false --wait=all --driver=docker 
pause_test.go:67: (dbg) Done: out/minikube-linux-amd64 start -p pause-20200609112904-5469 --memory=1800 --install-addons=false --wait=all --driver=docker : (4m38.671760807s)

                                                
                                    
TestFunctional/parallel/ComponentHealth (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ComponentHealth
=== PAUSE TestFunctional/parallel/ComponentHealth

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ComponentHealth
--- PASS: TestFunctional/parallel/ComponentHealth (0.58s)
functional_test.go:315: (dbg) Run:  kubectl --context functional-20200609111957-5469 get cs -o=json

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
functional_test.go:570: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 config unset cpus
functional_test.go:570: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 config get cpus
functional_test.go:570: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20200609111957-5469 config get cpus: exit status 64 (61.205747ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:570: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 config set cpus 2
functional_test.go:570: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 config get cpus
functional_test.go:570: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 config unset cpus
functional_test.go:570: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 config get cpus
functional_test.go:570: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20200609111957-5469 config get cpus: exit status 64 (77.097049ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
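The config test depends on minikube config get exiting non-zero (64 in these logs) when the key has no value. A sketch of the same unset/get/set cycle (the profile name is a placeholder):

    minikube -p <profile> config unset cpus
    minikube -p <profile> config get cpus      # fails: "Error: specified key could not be found in config"
    minikube -p <profile> config set cpus 2
    minikube -p <profile> config get cpus      # should now print the stored value
    minikube -p <profile> config unset cpus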

                                                
                                    
TestFunctional/parallel/DashboardCmd (4.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
2020/06/09 11:34:44 [DEBUG] GET http://127.0.0.1:38461/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/DashboardCmd (4.52s)
functional_test.go:386: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url -p functional-20200609111957-5469 --alsologtostderr -v=1]
functional_test.go:391: (dbg) stopping [out/minikube-linux-amd64 dashboard --url -p functional-20200609111957-5469 --alsologtostderr -v=1] ...
helpers.go:445: unable to kill pid 16669: os: process already finished

                                                
                                    
TestFunctional/parallel/DNS (9.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/DNS
=== PAUSE TestFunctional/parallel/DNS

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DNS
--- PASS: TestFunctional/parallel/DNS (9.73s)
functional_test.go:426: (dbg) Run:  kubectl --context functional-20200609111957-5469 replace --force -f testdata/busybox.yaml
functional_test.go:431: (dbg) TestFunctional/parallel/DNS: waiting 4m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers.go:331: "busybox" [f3446a03-cbf5-4b63-b9e6-8f8b8d92aa07] Pending
helpers.go:331: "busybox" [f3446a03-cbf5-4b63-b9e6-8f8b8d92aa07] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers.go:331: "busybox" [f3446a03-cbf5-4b63-b9e6-8f8b8d92aa07] Running
functional_test.go:431: (dbg) TestFunctional/parallel/DNS: integration-test=busybox healthy within 9.041284679s
functional_test.go:437: (dbg) Run:  kubectl --context functional-20200609111957-5469 exec busybox nslookup kubernetes.default

                                                
                                    
TestFunctional/parallel/DryRun (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
--- PASS: TestFunctional/parallel/DryRun (0.66s)
functional_test.go:461: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20200609111957-5469 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:461: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20200609111957-5469 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 78 (334.362313ms)

                                                
                                                
-- stdout --
	* [functional-20200609111957-5469] minikube v1.11.0 on Debian 9.12
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-8417-2953-11096160fe2f8f3514641b2254ae78d1dc809e3d/.minikube
	  - MINIKUBE_LOCATION=8417
	* Using the docker driver based on existing profile

                                                
                                                
-- /stdout --
** stderr ** 
	I0609 11:34:36.600420   13706 start.go:98] hostinfo: {"hostname":"kvm-integration-slave7","uptime":4631,"bootTime":1591723045,"procs":571,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.12","kernelVersion":"4.9.0-12-amd64","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"ae41e7f6-8b8e-4d40-b77d-1ebb5a2d5fdb"}
	I0609 11:34:36.601394   13706 start.go:108] virtualization: kvm host
	I0609 11:34:36.612158   13706 driver.go:260] Setting default libvirt URI to qemu:///system
	I0609 11:34:36.725126   13706 docker.go:95] docker version: linux-19.03.11
	I0609 11:34:36.728855   13706 start.go:214] selected driver: docker
	I0609 11:34:36.728869   13706 start.go:611] validating driver "docker" against &{Name:functional-20200609111957-5469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:2800 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:functional-20200609111957-5469 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8441 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false dashboard:false default-storageclass:true efk:false freshpod:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false] VerifyComponents:map[apiserver:true apps_running:true default_sa:true system_pods:true]}
	I0609 11:34:36.729074   13706 start.go:622] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
	I0609 11:34:36.729091   13706 start.go:940] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
	X Requested memory allocation 250MB is less than the usable minimum of <no value>MB

                                                
                                                
** /stderr **
functional_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20200609111957-5469 --dry-run --alsologtostderr -v=1 --driver=docker 
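The dry-run check validates the requested flags against the existing profile without starting anything; here a 250MB memory request is rejected (the "<no value>MB" in the message is reproduced verbatim from the run). Sketch, with <profile> as a placeholder:

    minikube start -p <profile> --dry-run --memory 250MB --driver=docker    # exits non-zero: memory below the usable minimum
    minikube start -p <profile> --dry-run --driver=docker                   # passes validation; nothing is created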

                                                
                                    
TestFunctional/parallel/StatusCmd (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
--- PASS: TestFunctional/parallel/StatusCmd (2.13s)
functional_test.go:341: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 status
functional_test.go:347: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 status -o json
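minikube status also accepts a Go-template format string and JSON output, which is what the second and third invocations above exercise (the "kublet" label is spelled that way in the test's own format string; the underlying field is .Kubelet). Equivalent commands against a placeholder profile:

    minikube -p <profile> status
    minikube -p <profile> status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
    minikube -p <profile> status -o json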

                                                
                                    
TestFunctional/parallel/LogsCmd (3.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/LogsCmd
=== PAUSE TestFunctional/parallel/LogsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/LogsCmd
--- PASS: TestFunctional/parallel/LogsCmd (3.32s)
functional_test.go:588: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 logs
functional_test.go:588: (dbg) Done: out/minikube-linux-amd64 -p functional-20200609111957-5469 logs: (3.319058631s)

                                                
                                    
TestFunctional/parallel/MountCmd (8.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
--- PASS: TestFunctional/parallel/MountCmd (8.69s)
fn_mount_cmd.go:72: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20200609111957-5469 /tmp/mounttest309722672:/mount-9p --alsologtostderr -v=1]
fn_mount_cmd.go:106: wrote "test-1591727662264051244" to /tmp/mounttest309722672/created-by-test
fn_mount_cmd.go:106: wrote "test-1591727662264051244" to /tmp/mounttest309722672/created-by-test-removed-by-pod
fn_mount_cmd.go:106: wrote "test-1591727662264051244" to /tmp/mounttest309722672/test-1591727662264051244
fn_mount_cmd.go:114: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 ssh "findmnt -T /mount-9p | grep 9p"
fn_mount_cmd.go:114: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20200609111957-5469 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (573.926979ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
fn_mount_cmd.go:114: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 ssh "findmnt -T /mount-9p | grep 9p"
fn_mount_cmd.go:128: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 ssh -- ls -la /mount-9p
fn_mount_cmd.go:132: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun  9 18:34 created-by-test
-rw-r--r-- 1 docker docker 24 Jun  9 18:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun  9 18:34 test-1591727662264051244
fn_mount_cmd.go:136: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 ssh cat /mount-9p/test-1591727662264051244
fn_mount_cmd.go:147: (dbg) Run:  kubectl --context functional-20200609111957-5469 replace --force -f testdata/busybox-mount-test.yaml
fn_mount_cmd.go:152: (dbg) TestFunctional/parallel/MountCmd: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers.go:331: "busybox-mount" [3eeb2c22-13c1-4a69-b1c1-e19af4f2523d] Pending
helpers.go:331: "busybox-mount" [3eeb2c22-13c1-4a69-b1c1-e19af4f2523d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers.go:331: "busybox-mount" [3eeb2c22-13c1-4a69-b1c1-e19af4f2523d] Running
helpers.go:331: "busybox-mount" [3eeb2c22-13c1-4a69-b1c1-e19af4f2523d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
fn_mount_cmd.go:152: (dbg) TestFunctional/parallel/MountCmd: integration-test=busybox-mount healthy within 4.011843701s
fn_mount_cmd.go:168: (dbg) Run:  kubectl --context functional-20200609111957-5469 logs busybox-mount
fn_mount_cmd.go:180: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 ssh stat /mount-9p/created-by-test
fn_mount_cmd.go:180: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 ssh stat /mount-9p/created-by-pod
fn_mount_cmd.go:89: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 ssh "sudo umount -f /mount-9p"
fn_mount_cmd.go:93: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20200609111957-5469 /tmp/mounttest309722672:/mount-9p --alsologtostderr -v=1] ...
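The mount test drives minikube mount (a 9p mount of a host directory into the node), verifies it from inside the node with findmnt, then has a pod read and write through it. A minimal manual version, with /tmp/hostdir as an illustrative host path and <profile> as a placeholder:

    minikube mount -p <profile> /tmp/hostdir:/mount-9p &          # keep it running; the test runs it as a background daemon
    minikube -p <profile> ssh "findmnt -T /mount-9p | grep 9p"    # confirm the 9p mount is visible in the node
    minikube -p <profile> ssh -- ls -la /mount-9p
    minikube -p <profile> ssh "sudo umount -f /mount-9p"          # clean up before stopping the mount process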

                                                
                                    
TestFunctional/parallel/ServiceCmd (26.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
--- PASS: TestFunctional/parallel/ServiceCmd (26.81s)
functional_test.go:703: (dbg) Run:  kubectl --context functional-20200609111957-5469 create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
functional_test.go:707: (dbg) Run:  kubectl --context functional-20200609111957-5469 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:712: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers.go:331: "hello-node-7bf657c596-l8frw" [7049ef90-f26e-4efa-877b-edb35d495fa8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers.go:331: "hello-node-7bf657c596-l8frw" [7049ef90-f26e-4efa-877b-edb35d495fa8] Running
functional_test.go:712: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 23.028607758s
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 service list
functional_test.go:716: (dbg) Done: out/minikube-linux-amd64 -p functional-20200609111957-5469 service list: (1.673537797s)
functional_test.go:729: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 service --namespace=default --https --url hello-node
functional_test.go:738: found endpoint: https://172.17.0.3:30613
functional_test.go:749: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 service hello-node --url --format={{.IP}}
functional_test.go:758: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 service hello-node --url
functional_test.go:764: found endpoint for hello-node: http://172.17.0.3:30613
functional_test.go:775: Attempting to fetch http://172.17.0.3:30613 ...
functional_test.go:794: http://172.17.0.3:30613: success! body:
CLIENT VALUES:
client_address=172.18.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://172.17.0.3:8080/

                                                
                                                
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

                                                
                                                
HEADERS RECEIVED:
accept-encoding=gzip
host=172.17.0.3:30613
user-agent=Go-http-client/1.1
BODY:
-no body in request-
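The service test is the standard expose-and-fetch loop: create a deployment, expose it as a NodePort service, then let minikube resolve the reachable URL, which the echoserver response above confirms. Sketch using the same names as the test, with <cluster-context> and <profile> as placeholders:

    kubectl --context <cluster-context> create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
    kubectl --context <cluster-context> expose deployment hello-node --type=NodePort --port=8080
    minikube -p <profile> service list
    minikube -p <profile> service hello-node --url               # e.g. http://<node-ip>:<nodeport>
    curl "$(minikube -p <profile> service hello-node --url)"     # should return the echoserver request dump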

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
--- PASS: TestFunctional/parallel/AddonsCmd (0.68s)
functional_test.go:809: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 addons list
functional_test.go:820: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 addons list -o json

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (5.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (5.63s)
fn_pvc.go:42: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers.go:331: "storage-provisioner" [e85db741-3910-4f42-b886-7037ed79fffb] Running
fn_pvc.go:42: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.011836333s
fn_pvc.go:47: (dbg) Run:  kubectl --context functional-20200609111957-5469 get storageclass -o=json
fn_pvc.go:67: (dbg) Run:  kubectl --context functional-20200609111957-5469 apply -f testdata/pvc.yaml
fn_pvc.go:73: (dbg) Run:  kubectl --context functional-20200609111957-5469 get pvc testpvc -o=json

                                                
                                    
TestFunctional/parallel/SSHCmd (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
--- PASS: TestFunctional/parallel/SSHCmd (1.00s)
functional_test.go:842: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 ssh "echo hello"
functional_test.go:859: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 ssh "cat /etc/hostname"

                                                
                                    
TestFunctional/parallel/MySQL (83.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
--- PASS: TestFunctional/parallel/MySQL (83.32s)
functional_test.go:877: (dbg) Run:  kubectl --context functional-20200609111957-5469 replace --force -f testdata/mysql.yaml
functional_test.go:882: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers.go:331: "mysql-78ff7d6cf9-4jtqf" [7031d0ce-1130-47d4-8da4-7c3bef146635] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers.go:331: "mysql-78ff7d6cf9-4jtqf" [7031d0ce-1130-47d4-8da4-7c3bef146635] Running
functional_test.go:882: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m8.155927364s
functional_test.go:889: (dbg) Run:  kubectl --context functional-20200609111957-5469 exec mysql-78ff7d6cf9-4jtqf -- mysql -ppassword -e "show databases;"
functional_test.go:889: (dbg) Non-zero exit: kubectl --context functional-20200609111957-5469 exec mysql-78ff7d6cf9-4jtqf -- mysql -ppassword -e "show databases;": exit status 1 (526.467892ms)

                                                
                                                
** stderr ** 
	Warning: Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:889: (dbg) Run:  kubectl --context functional-20200609111957-5469 exec mysql-78ff7d6cf9-4jtqf -- mysql -ppassword -e "show databases;"
functional_test.go:889: (dbg) Non-zero exit: kubectl --context functional-20200609111957-5469 exec mysql-78ff7d6cf9-4jtqf -- mysql -ppassword -e "show databases;": exit status 1 (417.051355ms)

                                                
                                                
** stderr ** 
	Warning: Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:889: (dbg) Run:  kubectl --context functional-20200609111957-5469 exec mysql-78ff7d6cf9-4jtqf -- mysql -ppassword -e "show databases;"
functional_test.go:889: (dbg) Non-zero exit: kubectl --context functional-20200609111957-5469 exec mysql-78ff7d6cf9-4jtqf -- mysql -ppassword -e "show databases;": exit status 1 (500.020117ms)

                                                
                                                
** stderr ** 
	Warning: Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:889: (dbg) Run:  kubectl --context functional-20200609111957-5469 exec mysql-78ff7d6cf9-4jtqf -- mysql -ppassword -e "show databases;"
functional_test.go:889: (dbg) Non-zero exit: kubectl --context functional-20200609111957-5469 exec mysql-78ff7d6cf9-4jtqf -- mysql -ppassword -e "show databases;": exit status 1 (373.492174ms)

                                                
                                                
** stderr ** 
	Warning: Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:889: (dbg) Run:  kubectl --context functional-20200609111957-5469 exec mysql-78ff7d6cf9-4jtqf -- mysql -ppassword -e "show databases;"
functional_test.go:889: (dbg) Non-zero exit: kubectl --context functional-20200609111957-5469 exec mysql-78ff7d6cf9-4jtqf -- mysql -ppassword -e "show databases;": exit status 1 (507.518677ms)

                                                
                                                
** stderr ** 
	Warning: Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:889: (dbg) Run:  kubectl --context functional-20200609111957-5469 exec mysql-78ff7d6cf9-4jtqf -- mysql -ppassword -e "show databases;"
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p functional-20200609111957-5469
helpers.go:170: (dbg) Done: out/minikube-linux-amd64 delete -p functional-20200609111957-5469: (5.439212772s)
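
The non-zero exits above (ERROR 2002 / ERROR 1045) are expected while mysqld initializes inside the pod, so the test simply re-runs the query until it succeeds. A minimal sketch of that polling loop, assuming the pod and context names from the log (not the code in functional_test.go):

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	// Pod and context names as reported in the log above.
	args := []string{"--context", "functional-20200609111957-5469", "exec",
		"mysql-78ff7d6cf9-4jtqf", "--", "mysql", "-ppassword", "-e", "show databases;"}
	deadline := time.Now().Add(5 * time.Minute)
	for {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			log.Printf("mysql is answering queries:\n%s", out)
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("mysql never became ready: %v\n%s", err, out)
		}
		// ERROR 2002/1045 while the server bootstraps are transient; wait and retry.
		time.Sleep(5 * time.Second)
	}
}

Retrying against a deadline is what turns the transient failures above into a single passing assertion.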

                                                
                                    
TestFunctional/parallel/FileSync (4.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
--- PASS: TestFunctional/parallel/FileSync (4.28s)
functional_test.go:972: Checking for existence of /etc/test/nested/copy/5469/hosts within VM
functional_test.go:973: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 ssh "sudo cat /etc/test/nested/copy/5469/hosts"
functional_test.go:973: (dbg) Done: out/minikube-linux-amd64 -p functional-20200609111957-5469 ssh "sudo cat /etc/test/nested/copy/5469/hosts": (4.275982073s)
functional_test.go:978: file sync test content: Test file for checking file sync process
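
What the file-sync assertion amounts to, as a simplified sketch (paths, profile name, and expected content are taken from the log above; this is not the repository's helper):

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Read the synced file back out of the node over `minikube ssh`.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-20200609111957-5469",
		"ssh", "sudo cat /etc/test/nested/copy/5469/hosts").CombinedOutput()
	if err != nil {
		log.Fatalf("ssh cat failed: %v\n%s", err, out)
	}
	if !strings.Contains(string(out), "Test file for checking file sync process") {
		log.Fatalf("unexpected synced file content: %q", out)
	}
	log.Print("file sync content verified")
}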

                                                
                                    
TestFunctional/parallel/CertSync (1.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
--- PASS: TestFunctional/parallel/CertSync (1.89s)
functional_test.go:1011: Checking for existence of /etc/ssl/certs/5469.pem within VM
functional_test.go:1012: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 ssh "sudo cat /etc/ssl/certs/5469.pem"
functional_test.go:1011: Checking for existence of /usr/share/ca-certificates/5469.pem within VM
functional_test.go:1012: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 ssh "sudo cat /usr/share/ca-certificates/5469.pem"
functional_test.go:1011: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1012: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 ssh "sudo cat /etc/ssl/certs/51391683.0"

                                                
                                    
TestFunctional/parallel/UpdateContextCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd
=== PAUSE TestFunctional/parallel/UpdateContextCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd
--- PASS: TestFunctional/parallel/UpdateContextCmd (0.21s)
functional_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p functional-20200609111957-5469 update-context --alsologtostderr -v=2

                                                
                                    
TestFunctional/parallel/NodeLabels (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
--- PASS: TestFunctional/parallel/NodeLabels (0.16s)
functional_test.go:142: (dbg) Run:  kubectl --context functional-20200609111957-5469 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (159.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (159.65s)
start_stop_delete_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20200609112907-5469 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --container-runtime=docker --driver=docker  --kubernetes-version=v1.13.0
start_stop_delete_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20200609112907-5469 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --container-runtime=docker --driver=docker  --kubernetes-version=v1.13.0: (2m39.650648148s)

                                                
                                    
TestStartStop/group/crio/serial/FirstStart (280.53s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/FirstStart
--- PASS: TestStartStop/group/crio/serial/FirstStart (280.53s)
start_stop_delete_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p crio-20200609113014-5469 --memory=2200 --alsologtostderr --wait=true --container-runtime=crio --disable-driver-mounts --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --driver=docker  --kubernetes-version=v1.15.7
start_stop_delete_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p crio-20200609113014-5469 --memory=2200 --alsologtostderr --wait=true --container-runtime=crio --disable-driver-mounts --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --driver=docker  --kubernetes-version=v1.15.7: (4m40.533293351s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (245.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (245.28s)
start_stop_delete_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20200609113042-5469 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.18.3
start_stop_delete_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20200609113042-5469 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.18.3: (4m5.277640008s)

                                                
                                    
TestStartStop/group/containerd/serial/FirstStart (124.76s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/FirstStart
--- PASS: TestStartStop/group/containerd/serial/FirstStart (124.76s)
start_stop_delete_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p containerd-20200609113134-5469 --memory=2200 --alsologtostderr --wait=true --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.18.3
start_stop_delete_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p containerd-20200609113134-5469 --memory=2200 --alsologtostderr --wait=true --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.18.3: (2m4.755916538s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (39.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (39.41s)
start_stop_delete_test.go:158: (dbg) Run:  kubectl --context old-k8s-version-20200609112907-5469 create -f testdata/busybox.yaml
start_stop_delete_test.go:158: (dbg) Done: kubectl --context old-k8s-version-20200609112907-5469 create -f testdata/busybox.yaml: (1.207703615s)
start_stop_delete_test.go:158: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers.go:331: "busybox" [7ab08ef4-aa7f-11ea-bf3b-0242eaf57846] Pending
helpers.go:331: "busybox" [7ab08ef4-aa7f-11ea-bf3b-0242eaf57846] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers.go:331: "busybox" [7ab08ef4-aa7f-11ea-bf3b-0242eaf57846] Running
start_stop_delete_test.go:158: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 28.549390244s
start_stop_delete_test.go:158: (dbg) Run:  kubectl --context old-k8s-version-20200609112907-5469 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:158: (dbg) Done: kubectl --context old-k8s-version-20200609112907-5469 exec busybox -- /bin/sh -c "ulimit -n": (9.642890252s)
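
A rough equivalent of the DeployApp flow, sketched with plain kubectl commands (`kubectl wait` here is a stand-in for the label-based pod polling the test actually performs; names come from the log above):

package main

import (
	"log"
	"os/exec"
)

// run executes a command and aborts with its combined output on failure.
func run(name string, args ...string) []byte {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	return out
}

func main() {
	kubeContext := "old-k8s-version-20200609112907-5469"
	run("kubectl", "--context", kubeContext, "create", "-f", "testdata/busybox.yaml")
	// Wait for the deployed pod to become Ready before exec'ing into it.
	run("kubectl", "--context", kubeContext, "wait", "--for=condition=Ready",
		"pod", "-l", "integration-test=busybox", "--timeout=8m")
	out := run("kubectl", "--context", kubeContext, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	log.Printf("open-file limit inside the pod: %s", out)
}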

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (15.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (15.15s)
start_stop_delete_test.go:164: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20200609112907-5469 --alsologtostderr -v=3
start_stop_delete_test.go:164: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20200609112907-5469 --alsologtostderr -v=3: (15.148987136s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (88.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (88.07s)
start_stop_delete_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20200609113238-5469 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.18.4-rc.0
start_stop_delete_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20200609113238-5469 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.18.4-rc.0: (1m28.068868297s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.33s)
start_stop_delete_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20200609112907-5469 -n old-k8s-version-20200609112907-5469
start_stop_delete_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20200609112907-5469 -n old-k8s-version-20200609112907-5469: exit status 7 (161.902782ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:174: status error: exit status 7 (may be ok)
start_stop_delete_test.go:181: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20200609112907-5469
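
As the log notes, `minikube status` exits with code 7 when the host is stopped, and the test treats that as acceptable before enabling an addon on the stopped profile. A minimal sketch of that handling (hypothetical helper, not start_stop_delete_test.go):

package main

import (
	"errors"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "old-k8s-version-20200609112907-5469"
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile).CombinedOutput()
	if err != nil {
		// Exit code 7 simply means the host is stopped, which is the expected state here.
		var exitErr *exec.ExitError
		if !errors.As(err, &exitErr) || exitErr.ExitCode() != 7 {
			log.Fatalf("unexpected status failure: %v\n%s", err, out)
		}
	}
	log.Printf("host state: %s", strings.TrimSpace(string(out)))

	if out, err := exec.Command("out/minikube-linux-amd64", "addons", "enable",
		"dashboard", "-p", profile).CombinedOutput(); err != nil {
		log.Fatalf("enable dashboard: %v\n%s", err, out)
	}
}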

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (72.70s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (72.70s)
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20200609112907-5469 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --container-runtime=docker --driver=docker  --kubernetes-version=v1.13.0
start_stop_delete_test.go:190: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20200609112907-5469 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --container-runtime=docker --driver=docker  --kubernetes-version=v1.13.0: (1m11.884909592s)
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20200609112907-5469 -n old-k8s-version-20200609112907-5469

                                                
                                    
TestStartStop/group/containerd/serial/DeployApp (9.41s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/DeployApp
--- PASS: TestStartStop/group/containerd/serial/DeployApp (9.41s)
start_stop_delete_test.go:158: (dbg) Run:  kubectl --context containerd-20200609113134-5469 create -f testdata/busybox.yaml
start_stop_delete_test.go:158: (dbg) TestStartStop/group/containerd/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers.go:331: "busybox" [0551fadc-3e51-48c8-ba30-0109dd5b4828] Pending
helpers.go:331: "busybox" [0551fadc-3e51-48c8-ba30-0109dd5b4828] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers.go:331: "busybox" [0551fadc-3e51-48c8-ba30-0109dd5b4828] Running
start_stop_delete_test.go:158: (dbg) TestStartStop/group/containerd/serial/DeployApp: integration-test=busybox healthy within 8.034997401s
start_stop_delete_test.go:158: (dbg) Run:  kubectl --context containerd-20200609113134-5469 exec busybox -- /bin/sh -c "ulimit -n"

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (21.85s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
--- PASS: TestPause/serial/SecondStartNoReconfiguration (21.85s)
pause_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20200609112904-5469 --alsologtostderr -v=1
pause_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p pause-20200609112904-5469 --alsologtostderr -v=1: (21.822992683s)

                                                
                                    
TestStartStop/group/containerd/serial/Stop (22.72s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/Stop
--- PASS: TestStartStop/group/containerd/serial/Stop (22.72s)
start_stop_delete_test.go:164: (dbg) Run:  out/minikube-linux-amd64 stop -p containerd-20200609113134-5469 --alsologtostderr -v=3
start_stop_delete_test.go:164: (dbg) Done: out/minikube-linux-amd64 stop -p containerd-20200609113134-5469 --alsologtostderr -v=3: (22.719118056s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
fn_tunnel_cmd.go:122: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20200609111957-5469 tunnel --alsologtostderr]

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (32.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (32.05s)
start_stop_delete_test.go:208: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers.go:331: "kubernetes-dashboard-7dc6b4cf5b-vwwtv" [d41a4194-aa7f-11ea-80d2-024250055120] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers.go:331: "kubernetes-dashboard-7dc6b4cf5b-vwwtv" [d41a4194-aa7f-11ea-80d2-024250055120] Running
start_stop_delete_test.go:208: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 32.041709479s

                                                
                                    
TestPause/serial/Pause (0.82s)

                                                
                                                
=== RUN   TestPause/serial/Pause
--- PASS: TestPause/serial/Pause (0.82s)
pause_test.go:95: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20200609112904-5469 --alsologtostderr -v=5

                                                
                                    
TestPause/serial/Unpause (0.96s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
--- PASS: TestPause/serial/Unpause (0.96s)
pause_test.go:105: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20200609112904-5469 --alsologtostderr -v=5

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.97s)
start_stop_delete_test.go:164: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20200609113238-5469 --alsologtostderr -v=3
start_stop_delete_test.go:164: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20200609113238-5469 --alsologtostderr -v=3: (2.971271956s)

                                                
                                    
TestPause/serial/PauseAgain (1.09s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
--- PASS: TestPause/serial/PauseAgain (1.09s)
pause_test.go:95: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20200609112904-5469 --alsologtostderr -v=5
pause_test.go:95: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20200609112904-5469 --alsologtostderr -v=5: (1.091166421s)

                                                
                                    
TestPause/serial/DeletePaused (3.61s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
--- PASS: TestPause/serial/DeletePaused (3.61s)
pause_test.go:115: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20200609112904-5469 --alsologtostderr -v=5
pause_test.go:115: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20200609112904-5469 --alsologtostderr -v=5: (3.607195118s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.39s)
start_stop_delete_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20200609113238-5469 -n newest-cni-20200609113238-5469
start_stop_delete_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20200609113238-5469 -n newest-cni-20200609113238-5469: exit status 7 (190.941251ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:174: status error: exit status 7 (may be ok)
start_stop_delete_test.go:181: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20200609113238-5469

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (65.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (65.76s)
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20200609113238-5469 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.18.4-rc.0
start_stop_delete_test.go:190: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20200609113238-5469 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.18.4-rc.0: (1m5.343050717s)
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20200609113238-5469 -n newest-cni-20200609113238-5469

                                                
                                    
TestStartStop/group/containerd/serial/EnableAddonAfterStop (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/EnableAddonAfterStop
--- PASS: TestStartStop/group/containerd/serial/EnableAddonAfterStop (0.31s)
start_stop_delete_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p containerd-20200609113134-5469 -n containerd-20200609113134-5469
start_stop_delete_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p containerd-20200609113134-5469 -n containerd-20200609113134-5469: exit status 7 (155.812023ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:174: status error: exit status 7 (may be ok)
start_stop_delete_test.go:181: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p containerd-20200609113134-5469

                                                
                                    
TestPause/serial/VerifyDeletedResources (1.47s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
--- PASS: TestPause/serial/VerifyDeletedResources (1.47s)
pause_test.go:125: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:125: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.200212679s)
pause_test.go:151: (dbg) Run:  docker ps -a
pause_test.go:156: (dbg) Run:  docker volume inspect pause-20200609112904-5469
pause_test.go:156: (dbg) Non-zero exit: docker volume inspect pause-20200609112904-5469: exit status 1 (151.912605ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20200609112904-5469

                                                
                                                
** /stderr **
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20200609112904-5469
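
The deleted-resources check above asserts that the profile's Docker volume is gone after `minikube delete`. A simplified sketch (assumed helper; the profile name and the "No such volume" message are taken from the log):

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "pause-20200609112904-5469"
	out, err := exec.Command("docker", "volume", "inspect", profile).CombinedOutput()
	if err == nil {
		log.Fatalf("volume %s still exists after delete:\n%s", profile, out)
	}
	if !strings.Contains(string(out), "No such volume") {
		log.Fatalf("inspect failed for an unexpected reason: %v\n%s", err, out)
	}
	log.Printf("volume %s was cleaned up as expected", profile)
}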

                                                
                                    
TestStartStop/group/containerd/serial/SecondStart (63.26s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/SecondStart
--- PASS: TestStartStop/group/containerd/serial/SecondStart (63.26s)
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 start -p containerd-20200609113134-5469 --memory=2200 --alsologtostderr --wait=true --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.18.3
start_stop_delete_test.go:190: (dbg) Done: out/minikube-linux-amd64 start -p containerd-20200609113134-5469 --memory=2200 --alsologtostderr --wait=true --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.18.3: (1m2.830871536s)
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p containerd-20200609113134-5469 -n containerd-20200609113134-5469

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)
fn_tunnel_cmd.go:157: (dbg) Run:  kubectl --context functional-20200609111957-5469 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
fn_tunnel_cmd.go:219: tunnel at http://10.103.208.22 is working!
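
Putting the tunnel steps together: once `minikube tunnel` is running, the LoadBalancer ingress IP reported by kubectl should answer plain HTTP. A sketch under that assumption (service and context names come from the log; this is not fn_tunnel_cmd.go itself):

package main

import (
	"fmt"
	"log"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Ask kubectl for the LoadBalancer IP that the tunnel is expected to route.
	out, err := exec.Command("kubectl", "--context", "functional-20200609111957-5469",
		"get", "svc", "nginx-svc", "-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		log.Fatalf("get ingress ip: %v", err)
	}
	ip := strings.TrimSpace(string(out))
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(fmt.Sprintf("http://%s", ip))
	if err != nil {
		log.Fatalf("tunnel at http://%s is not reachable: %v", ip, err)
	}
	defer resp.Body.Close()
	log.Printf("tunnel at http://%s is working (status %d)", ip, resp.StatusCode)
}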

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
fn_tunnel_cmd.go:342: (dbg) stopping [out/minikube-linux-amd64 -p functional-20200609111957-5469 tunnel --alsologtostderr] ...

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (1.24s)
functional_test.go:604: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:608: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
functional_test.go:608: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.010806563s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.80s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.80s)
functional_test.go:629: (dbg) Run:  out/minikube-linux-amd64 profile list

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.69s)
functional_test.go:651: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
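
Consuming `profile list --output json` programmatically might look like the sketch below. The exact JSON schema is not shown in this log, so the sketch only verifies that the output parses and prints the top-level keys:

package main

import (
	"encoding/json"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatalf("profile list: %v", err)
	}
	// Decode loosely: only assert that the command emitted a JSON object.
	var profiles map[string]json.RawMessage
	if err := json.Unmarshal(out, &profiles); err != nil {
		log.Fatalf("profile list did not return valid JSON: %v\n%s", err, out)
	}
	for key := range profiles {
		log.Printf("top-level key: %s", key)
	}
}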

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.01s)
start_stop_delete_test.go:219: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers.go:331: "kubernetes-dashboard-7dc6b4cf5b-vwwtv" [d41a4194-aa7f-11ea-80d2-024250055120] Running
start_stop_delete_test.go:219: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009584441s

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.48s)
start_stop_delete_test.go:227: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20200609112907-5469 "sudo crictl images -o json"
start_stop_delete_test.go:227: Found non-minikube image: busybox:1.28.4-glibc
start_stop_delete_test.go:227: Found non-minikube image: busybox:latest
start_stop_delete_test.go:227: Found non-minikube image: k8s.gcr.io/pause:latest
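
The image verification shells into the node and lists images via crictl. The sketch below assumes the usual `crictl images -o json` layout (an `images` array with `repoTags`) and uses a simplified registry-prefix heuristic rather than the test's real expected-image list:

package main

import (
	"encoding/json"
	"log"
	"os/exec"
	"strings"
)

// crictlImages mirrors the assumed shape of `crictl images -o json` output.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "ssh", "-p",
		"old-k8s-version-20200609112907-5469", "sudo crictl images -o json").Output()
	if err != nil {
		log.Fatalf("crictl images: %v", err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		log.Fatalf("parse crictl output: %v", err)
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			// Simplified heuristic: anything outside the usual registries is "non-minikube".
			if !strings.HasPrefix(tag, "k8s.gcr.io/") && !strings.HasPrefix(tag, "gcr.io/") {
				log.Printf("found non-minikube image: %s", tag)
			}
		}
	}
}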

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (4.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.38s)
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20200609112907-5469 --alsologtostderr -v=1
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20200609112907-5469 -n old-k8s-version-20200609112907-5469
start_stop_delete_test.go:233: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20200609112907-5469 -n old-k8s-version-20200609112907-5469: exit status 2 (474.665854ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:233: status error: exit status 2 (may be ok)
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20200609112907-5469 -n old-k8s-version-20200609112907-5469
start_stop_delete_test.go:233: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20200609112907-5469 -n old-k8s-version-20200609112907-5469: exit status 2 (509.990965ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:233: status error: exit status 2 (may be ok)
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20200609112907-5469 --alsologtostderr -v=1
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20200609112907-5469 -n old-k8s-version-20200609112907-5469
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20200609112907-5469 -n old-k8s-version-20200609112907-5469
start_stop_delete_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p old-k8s-version-20200609112907-5469
start_stop_delete_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p old-k8s-version-20200609112907-5469: (3.417777619s)
start_stop_delete_test.go:131: (dbg) Run:  kubectl config get-contexts old-k8s-version-20200609112907-5469
start_stop_delete_test.go:131: (dbg) Non-zero exit: kubectl config get-contexts old-k8s-version-20200609112907-5469: exit status 1 (75.182495ms)

                                                
                                                
-- stdout --
	CURRENT   NAME   CLUSTER   AUTHINFO   NAMESPACE

                                                
                                                
-- /stdout --
** stderr ** 
	error: context old-k8s-version-20200609112907-5469 not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:133: config context error: exit status 1 (may be ok)
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p old-k8s-version-20200609112907-5469

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.85s)
start_stop_delete_test.go:158: (dbg) Run:  kubectl --context embed-certs-20200609113042-5469 create -f testdata/busybox.yaml
start_stop_delete_test.go:158: (dbg) Done: kubectl --context embed-certs-20200609113042-5469 create -f testdata/busybox.yaml: (2.523878225s)
start_stop_delete_test.go:158: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers.go:331: "busybox" [f968fdbd-87aa-484a-b6cb-697ff9dfb23a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers.go:331: "busybox" [f968fdbd-87aa-484a-b6cb-697ff9dfb23a] Running
start_stop_delete_test.go:158: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.014935646s
start_stop_delete_test.go:158: (dbg) Run:  kubectl --context embed-certs-20200609113042-5469 exec busybox -- /bin/sh -c "ulimit -n"

                                                
                                    
TestStartStop/group/crio/serial/DeployApp (8.72s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/DeployApp
--- PASS: TestStartStop/group/crio/serial/DeployApp (8.72s)
start_stop_delete_test.go:158: (dbg) Run:  kubectl --context crio-20200609113014-5469 create -f testdata/busybox.yaml
start_stop_delete_test.go:158: (dbg) TestStartStop/group/crio/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers.go:331: "busybox" [e29d609a-d2df-4d72-89ef-63f7027f8c41] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers.go:331: "busybox" [e29d609a-d2df-4d72-89ef-63f7027f8c41] Running
start_stop_delete_test.go:158: (dbg) TestStartStop/group/crio/serial/DeployApp: integration-test=busybox healthy within 8.016808372s
start_stop_delete_test.go:158: (dbg) Run:  kubectl --context crio-20200609113014-5469 exec busybox -- /bin/sh -c "ulimit -n"

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.36s)
start_stop_delete_test.go:164: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20200609113042-5469 --alsologtostderr -v=3
start_stop_delete_test.go:164: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20200609113042-5469 --alsologtostderr -v=3: (11.364423345s)

                                                
                                    
TestStartStop/group/crio/serial/Stop (21.43s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/Stop
--- PASS: TestStartStop/group/crio/serial/Stop (21.43s)
start_stop_delete_test.go:164: (dbg) Run:  out/minikube-linux-amd64 stop -p crio-20200609113014-5469 --alsologtostderr -v=3
start_stop_delete_test.go:164: (dbg) Done: out/minikube-linux-amd64 stop -p crio-20200609113014-5469 --alsologtostderr -v=3: (21.426584626s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)
start_stop_delete_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20200609113042-5469 -n embed-certs-20200609113042-5469
start_stop_delete_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20200609113042-5469 -n embed-certs-20200609113042-5469: exit status 7 (122.996621ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:174: status error: exit status 7 (may be ok)
start_stop_delete_test.go:181: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20200609113042-5469

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (49.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.88s)
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20200609113042-5469 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.18.3
start_stop_delete_test.go:190: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20200609113042-5469 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.18.3: (49.462868525s)
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20200609113042-5469 -n embed-certs-20200609113042-5469

                                                
                                    
TestStartStop/group/containerd/serial/UserAppExistsAfterStop (23.02s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/UserAppExistsAfterStop
--- PASS: TestStartStop/group/containerd/serial/UserAppExistsAfterStop (23.02s)
start_stop_delete_test.go:208: (dbg) TestStartStop/group/containerd/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers.go:331: "kubernetes-dashboard-696dbcc666-k8tzg" [5476d8b4-3ff4-4ccf-afd8-3a3508f3dbe9] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers.go:331: "kubernetes-dashboard-696dbcc666-k8tzg" [5476d8b4-3ff4-4ccf-afd8-3a3508f3dbe9] Running
helpers.go:331: "kubernetes-dashboard-696dbcc666-k8tzg" [5476d8b4-3ff4-4ccf-afd8-3a3508f3dbe9] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers.go:331: "kubernetes-dashboard-696dbcc666-k8tzg" [5476d8b4-3ff4-4ccf-afd8-3a3508f3dbe9] Running
helpers.go:331: "kubernetes-dashboard-696dbcc666-k8tzg" [5476d8b4-3ff4-4ccf-afd8-3a3508f3dbe9] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:208: (dbg) TestStartStop/group/containerd/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 23.018372609s

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
start_stop_delete_test.go:207: WARNING: cni mode requires additional setup before pods can schedule :(

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
start_stop_delete_test.go:218: WARNING: cni mode requires additional setup before pods can schedule :(

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)
start_stop_delete_test.go:227: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20200609113238-5469 "sudo crictl images -o json"
start_stop_delete_test.go:227: Found non-minikube image: busybox:latest
start_stop_delete_test.go:227: Found non-minikube image: k8s.gcr.io/pause:latest

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.40s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.40s)
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20200609113238-5469 --alsologtostderr -v=1
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20200609113238-5469 -n newest-cni-20200609113238-5469
start_stop_delete_test.go:233: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20200609113238-5469 -n newest-cni-20200609113238-5469: exit status 2 (396.083407ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:233: status error: exit status 2 (may be ok)
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20200609113238-5469 -n newest-cni-20200609113238-5469
start_stop_delete_test.go:233: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20200609113238-5469 -n newest-cni-20200609113238-5469: exit status 2 (391.492316ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:233: status error: exit status 2 (may be ok)
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20200609113238-5469 --alsologtostderr -v=1
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20200609113238-5469 -n newest-cni-20200609113238-5469
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20200609113238-5469 -n newest-cni-20200609113238-5469
start_stop_delete_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p newest-cni-20200609113238-5469
start_stop_delete_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p newest-cni-20200609113238-5469: (3.078343042s)
start_stop_delete_test.go:131: (dbg) Run:  kubectl config get-contexts newest-cni-20200609113238-5469
start_stop_delete_test.go:131: (dbg) Non-zero exit: kubectl config get-contexts newest-cni-20200609113238-5469: exit status 1 (78.15154ms)

                                                
                                                
-- stdout --
	CURRENT   NAME   CLUSTER   AUTHINFO   NAMESPACE

                                                
                                                
-- /stdout --
** stderr ** 
	error: context newest-cni-20200609113238-5469 not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:133: config context error: exit status 1 (may be ok)
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p newest-cni-20200609113238-5469

                                                
                                    
TestStartStop/group/crio/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/EnableAddonAfterStop
--- PASS: TestStartStop/group/crio/serial/EnableAddonAfterStop (0.24s)
start_stop_delete_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p crio-20200609113014-5469 -n crio-20200609113014-5469
start_stop_delete_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p crio-20200609113014-5469 -n crio-20200609113014-5469: exit status 7 (120.503389ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:174: status error: exit status 7 (may be ok)
start_stop_delete_test.go:181: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p crio-20200609113014-5469

                                                
                                    
TestStartStop/group/crio/serial/SecondStart (61.01s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/SecondStart
--- PASS: TestStartStop/group/crio/serial/SecondStart (61.01s)
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 start -p crio-20200609113014-5469 --memory=2200 --alsologtostderr --wait=true --container-runtime=crio --disable-driver-mounts --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --driver=docker  --kubernetes-version=v1.15.7
start_stop_delete_test.go:190: (dbg) Done: out/minikube-linux-amd64 start -p crio-20200609113014-5469 --memory=2200 --alsologtostderr --wait=true --container-runtime=crio --disable-driver-mounts --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --driver=docker  --kubernetes-version=v1.15.7: (1m0.524198357s)
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p crio-20200609113014-5469 -n crio-20200609113014-5469

                                                
                                    
TestStartStop/group/containerd/serial/AddonExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/AddonExistsAfterStop
--- PASS: TestStartStop/group/containerd/serial/AddonExistsAfterStop (5.01s)
start_stop_delete_test.go:219: (dbg) TestStartStop/group/containerd/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers.go:331: "kubernetes-dashboard-696dbcc666-k8tzg" [5476d8b4-3ff4-4ccf-afd8-3a3508f3dbe9] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:219: (dbg) TestStartStop/group/containerd/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008288124s
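AddonExistsAfterStop waits up to 9 minutes for the dashboard pods (label k8s-app=kubernetes-dashboard, namespace kubernetes-dashboard) to become healthy after the restart; here they were healthy within about 5 seconds. A rough manual equivalent with kubectl (an approximation of the test's own polling, not the test code; the --timeout value mirrors the 9m budget):

	kubectl --context containerd-20200609113134-5469 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m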

                                                
                                    
TestStartStop/group/containerd/serial/VerifyKubernetesImages (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/VerifyKubernetesImages
--- PASS: TestStartStop/group/containerd/serial/VerifyKubernetesImages (0.37s)
start_stop_delete_test.go:227: (dbg) Run:  out/minikube-linux-amd64 ssh -p containerd-20200609113134-5469 "sudo crictl images -o json"
start_stop_delete_test.go:227: Found non-minikube image: kindest/kindnetd:0.5.4
start_stop_delete_test.go:227: Found non-minikube image: library/busybox:1.28.4-glibc
start_stop_delete_test.go:227: Found non-minikube image: library/busybox:latest
start_stop_delete_test.go:227: Found non-minikube image: k8s.gcr.io/pause:latest
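VerifyKubernetesImages lists the images present on the node via crictl and reports anything outside the expected minikube/Kubernetes set; the busybox, kindnetd and pause:latest entries above come from earlier steps in the run. The same listing can be pulled by hand:

	PROFILE=containerd-20200609113134-5469
	out/minikube-linux-amd64 ssh -p "$PROFILE" "sudo crictl images -o json"   # machine-readable dump used by the test
	out/minikube-linux-amd64 ssh -p "$PROFILE" "sudo crictl images"           # plain table, easier to eyeball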

                                                
                                    
TestStartStop/group/containerd/serial/Pause (3.09s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/Pause
--- PASS: TestStartStop/group/containerd/serial/Pause (3.09s)
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 pause -p containerd-20200609113134-5469 --alsologtostderr -v=1
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p containerd-20200609113134-5469 -n containerd-20200609113134-5469
start_stop_delete_test.go:233: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p containerd-20200609113134-5469 -n containerd-20200609113134-5469: exit status 2 (428.71018ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:233: status error: exit status 2 (may be ok)
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p containerd-20200609113134-5469 -n containerd-20200609113134-5469
start_stop_delete_test.go:233: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p containerd-20200609113134-5469 -n containerd-20200609113134-5469: exit status 2 (428.347135ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:233: status error: exit status 2 (may be ok)
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 unpause -p containerd-20200609113134-5469 --alsologtostderr -v=1
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p containerd-20200609113134-5469 -n containerd-20200609113134-5469
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p containerd-20200609113134-5469 -n containerd-20200609113134-5469
start_stop_delete_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p containerd-20200609113134-5469
start_stop_delete_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p containerd-20200609113134-5469: (3.222925436s)
start_stop_delete_test.go:131: (dbg) Run:  kubectl config get-contexts containerd-20200609113134-5469
start_stop_delete_test.go:131: (dbg) Non-zero exit: kubectl config get-contexts containerd-20200609113134-5469: exit status 1 (88.759483ms)

                                                
                                                
-- stdout --
	CURRENT   NAME   CLUSTER   AUTHINFO   NAMESPACE

                                                
                                                
-- /stdout --
** stderr ** 
	error: context containerd-20200609113134-5469 not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:133: config context error: exit status 1 (may be ok)
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p containerd-20200609113134-5469
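The Pause test pauses the cluster, confirms via templated status output that the apiserver reports Paused and the kubelet reports Stopped (both queries exit with status 2, which the test tolerates), unpauses, and then repeats the delete-and-check-context teardown shown earlier. A condensed manual walk-through of the pause/unpause cycle:

	PROFILE=containerd-20200609113134-5469
	out/minikube-linux-amd64 pause -p "$PROFILE" --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$PROFILE" -n "$PROFILE"   # "Paused", exits 2
	out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p "$PROFILE" -n "$PROFILE"     # "Stopped", exits 2
	out/minikube-linux-amd64 unpause -p "$PROFILE" --alsologtostderr -v=1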

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (28.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (28.02s)
start_stop_delete_test.go:208: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers.go:331: "kubernetes-dashboard-696dbcc666-jc9w5" [6f6213dd-9176-4c0f-b70c-41d7a0c7b1b2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers.go:331: "kubernetes-dashboard-696dbcc666-jc9w5" [6f6213dd-9176-4c0f-b70c-41d7a0c7b1b2] Running
start_stop_delete_test.go:208: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 28.015259298s

                                                
                                    
TestStartStop/group/crio/serial/UserAppExistsAfterStop (51.02s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/UserAppExistsAfterStop
--- PASS: TestStartStop/group/crio/serial/UserAppExistsAfterStop (51.02s)
start_stop_delete_test.go:208: (dbg) TestStartStop/group/crio/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers.go:331: "kubernetes-dashboard-5b4bfff886-xjb2z" [9cfe1a01-dcd6-4daf-8a3f-59134e81ed7c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers.go:331: "kubernetes-dashboard-5b4bfff886-xjb2z" [9cfe1a01-dcd6-4daf-8a3f-59134e81ed7c] Running
start_stop_delete_test.go:208: (dbg) TestStartStop/group/crio/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 51.017008108s

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.01s)
start_stop_delete_test.go:219: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers.go:331: "kubernetes-dashboard-696dbcc666-jc9w5" [6f6213dd-9176-4c0f-b70c-41d7a0c7b1b2] Running
start_stop_delete_test.go:219: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007316541s

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)
start_stop_delete_test.go:227: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20200609113042-5469 "sudo crictl images -o json"
start_stop_delete_test.go:227: Found non-minikube image: busybox:1.28.4-glibc
start_stop_delete_test.go:227: Found non-minikube image: busybox:latest
start_stop_delete_test.go:227: Found non-minikube image: k8s.gcr.io/pause:latest

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.70s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.70s)
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20200609113042-5469 --alsologtostderr -v=1
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20200609113042-5469 -n embed-certs-20200609113042-5469
start_stop_delete_test.go:233: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20200609113042-5469 -n embed-certs-20200609113042-5469: exit status 2 (381.148291ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:233: status error: exit status 2 (may be ok)
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20200609113042-5469 -n embed-certs-20200609113042-5469
start_stop_delete_test.go:233: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20200609113042-5469 -n embed-certs-20200609113042-5469: exit status 2 (379.436736ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:233: status error: exit status 2 (may be ok)
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20200609113042-5469 --alsologtostderr -v=1
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20200609113042-5469 -n embed-certs-20200609113042-5469
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20200609113042-5469 -n embed-certs-20200609113042-5469
start_stop_delete_test.go:233: (dbg) Done: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20200609113042-5469 -n embed-certs-20200609113042-5469: (1.208114611s)
start_stop_delete_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p embed-certs-20200609113042-5469
start_stop_delete_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p embed-certs-20200609113042-5469: (2.977758285s)
start_stop_delete_test.go:131: (dbg) Run:  kubectl config get-contexts embed-certs-20200609113042-5469
start_stop_delete_test.go:131: (dbg) Non-zero exit: kubectl config get-contexts embed-certs-20200609113042-5469: exit status 1 (56.961967ms)

                                                
                                                
-- stdout --
	CURRENT   NAME   CLUSTER   AUTHINFO   NAMESPACE

                                                
                                                
-- /stdout --
** stderr ** 
	error: context embed-certs-20200609113042-5469 not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:133: config context error: exit status 1 (may be ok)
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p embed-certs-20200609113042-5469

                                                
                                    
TestStartStop/group/crio/serial/AddonExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/AddonExistsAfterStop
--- PASS: TestStartStop/group/crio/serial/AddonExistsAfterStop (5.01s)
start_stop_delete_test.go:219: (dbg) TestStartStop/group/crio/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers.go:331: "kubernetes-dashboard-5b4bfff886-xjb2z" [9cfe1a01-dcd6-4daf-8a3f-59134e81ed7c] Running
start_stop_delete_test.go:219: (dbg) TestStartStop/group/crio/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007644486s

                                                
                                    
TestStartStop/group/crio/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/VerifyKubernetesImages
--- PASS: TestStartStop/group/crio/serial/VerifyKubernetesImages (0.36s)
start_stop_delete_test.go:227: (dbg) Run:  out/minikube-linux-amd64 ssh -p crio-20200609113014-5469 "sudo crictl images -o json"
start_stop_delete_test.go:227: Found non-minikube image: kindest/kindnetd:0.5.4
start_stop_delete_test.go:227: Found non-minikube image: library/busybox:1.28.4-glibc
start_stop_delete_test.go:227: Found non-minikube image: k8s.gcr.io/pause:latest
start_stop_delete_test.go:227: Found non-minikube image: busybox:latest

                                                
                                    
TestStartStop/group/crio/serial/Pause (3.13s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/Pause
--- PASS: TestStartStop/group/crio/serial/Pause (3.13s)
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 pause -p crio-20200609113014-5469 --alsologtostderr -v=1
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p crio-20200609113014-5469 -n crio-20200609113014-5469
start_stop_delete_test.go:233: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p crio-20200609113014-5469 -n crio-20200609113014-5469: exit status 2 (372.325644ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:233: status error: exit status 2 (may be ok)
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p crio-20200609113014-5469 -n crio-20200609113014-5469
start_stop_delete_test.go:233: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p crio-20200609113014-5469 -n crio-20200609113014-5469: exit status 2 (373.060814ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:233: status error: exit status 2 (may be ok)
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 unpause -p crio-20200609113014-5469 --alsologtostderr -v=1
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p crio-20200609113014-5469 -n crio-20200609113014-5469
start_stop_delete_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p crio-20200609113014-5469 -n crio-20200609113014-5469
start_stop_delete_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p crio-20200609113014-5469
start_stop_delete_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p crio-20200609113014-5469: (3.043029494s)
start_stop_delete_test.go:131: (dbg) Run:  kubectl config get-contexts crio-20200609113014-5469
start_stop_delete_test.go:131: (dbg) Non-zero exit: kubectl config get-contexts crio-20200609113014-5469: exit status 1 (62.7932ms)

                                                
                                                
-- stdout --
	CURRENT   NAME   CLUSTER   AUTHINFO   NAMESPACE

                                                
                                                
-- /stdout --
** stderr ** 
	error: context crio-20200609113014-5469 not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:133: config context error: exit status 1 (may be ok)
helpers.go:170: (dbg) Run:  out/minikube-linux-amd64 delete -p crio-20200609113014-5469

                                                
                                    

Test skip (6/128)

TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)
driver_install_or_update_test.go:102: Skip if not darwin.

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
--- SKIP: TestGvisorAddon (0.00s)
gvisor_addon_test.go:33: skipping test because --gvisor=false

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
--- SKIP: TestChangeNoneUser (0.00s)
none_test.go:38: Only test none driver.

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)
fn_tunnel_cmd.go:92: DNS forwarding is supported for darwin only now, skipping test DNS forwarding

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)
fn_tunnel_cmd.go:92: DNS forwarding is supported for darwin only now, skipping test DNS forwarding

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
fn_tunnel_cmd.go:92: DNS forwarding is supported for darwin only now, skipping test DNS forwarding

                                                
                                    