Started by upstream project "Build_Cross" build number 5733
originally caused by:
 GitHub pull request #3714 of commit daec030cdfb543e2314b0e13d7a55b0d5901de0a, no merge conflicts.
[EnvInject] - Loading node environment variables.
[EnvInject] - Preparing an environment for the build.
[EnvInject] - Keeping Jenkins system variables.
[EnvInject] - Keeping Jenkins build variables.
[EnvInject] - Evaluating the Groovy script content
[EnvInject] - Injecting contributions.
Building remotely on GCP - Linux (kvm virtualbox linux skaffold docker gcp-linux) in workspace /home/jenkins/workspace/Linux_Integration_Tests_none
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
[Linux_Integration_Tests_none] $ /bin/bash -xe /tmp/jenkins185104187690532412.sh
+ set -e
+ gsutil -m cp -r gs://minikube-builds/3714/common.sh .
Copying gs://minikube-builds/3714/common.sh...
/ [1/1 files][  7.1 KiB/  7.1 KiB] 100% Done
Operation completed over 1 objects/7.1 KiB.
+ gsutil cp gs://minikube-builds/3714/print-debug-info.sh .
Copying gs://minikube-builds/3714/print-debug-info.sh...
/ [1 files][  1.7 KiB/  1.7 KiB]
Operation completed over 1 objects/1.7 KiB.
+ gsutil cp gs://minikube-builds/3714/linux_integration_tests_none.sh .
Copying gs://minikube-builds/3714/linux_integration_tests_none.sh...
/ [1 files][  1.7 KiB/  1.7 KiB]
Operation completed over 1 objects/1.7 KiB.
+ bash -x linux_integration_tests_none.sh
+ set -e
+ OS_ARCH=linux-amd64
+ VM_DRIVER=none
+ JOB_NAME=Linux-None
+ EXTRA_ARGS=--bootstrapper=kubeadm
+ SUDO_PREFIX='sudo -E '
+ export KUBECONFIG=/root/.kube/config
+ KUBECONFIG=/root/.kube/config
+ sudo kubeadm reset
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]:
Aborted reset operation
+ sudo kubeadm reset -f
[preflight] running pre-flight checks
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0322 14:07:15.339339   13111 reset.go:213] [reset] Unable to fetch the kubeadm-config ConfigMap, using etcd pod spec as fallback: failed to get config map: Get https://10.128.0.3:8443/api/v1/namespaces/kube-system/configmaps/kubeadm-config: dial tcp 10.128.0.3:8443: connect: connection refused
[reset] no etcd config found. Assuming external etcd
[reset] please manually reset etcd to prevent further issues
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
+ sudo rm -rf '/data/*'
+ sudo rm -rf /etc/kubernetes/addons
+ sudo rm -rf '/var/lib/minikube/*'
+ systemctl is-active --quiet kubelet
+ source ./common.sh
++ readonly TEST_ROOT=/home/jenkins/minikube-integration
++ TEST_ROOT=/home/jenkins/minikube-integration
++ readonly TEST_HOME=/home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a
++ TEST_HOME=/home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a
+++ date
++ echo '>> Starting at Fri Mar 22 14:07:15 UTC 2019'
>> Starting at Fri Mar 22 14:07:15 UTC 2019
++ echo ''

++ echo 'arch: linux-amd64'
arch: linux-amd64
++ echo 'build: 3714'
build: 3714
++ echo 'driver: none'
driver: none
++ echo 'job: Linux-None'
job: Linux-None
++ echo 'test home: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a'
test home: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a
++ echo 'sudo: sudo -E '
sudo: sudo -E
+++ uname -v
++ echo 'kernel: #1 SMP Debian 4.9.144-3.1 (2019-02-19)'
kernel: #1 SMP Debian 4.9.144-3.1 (2019-02-19)
+++ env KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a kubectl version --client --short=true
++ echo 'kubectl: Client Version: v1.10.0'
kubectl: Client Version: v1.10.0
+++ docker version --format '{{ .Client.Version }}'
++ echo 'docker: 18.06.1-ce'
docker: 18.06.1-ce
++ case "${VM_DRIVER}" in
++ echo ''

++ mkdir -p out/ testdata/
++ type -P gsutil
+++ pwd
++ PATH=/home/jenkins/workspace/Linux_Integration_Tests_none/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin
++ export PATH
++ echo ''

++ echo '>> Downloading test inputs from 3714 ...'
>> Downloading test inputs from 3714 ...
++ gsutil -qm cp gs://minikube-builds/3714/minikube-linux-amd64 'gs://minikube-builds/3714/docker-machine-driver-*' gs://minikube-builds/3714/e2e-linux-amd64 out
++ gsutil -qm cp 'gs://minikube-builds/3714/testdata/*' testdata/
++ export MINIKUBE_BIN=out/minikube-linux-amd64
++ MINIKUBE_BIN=out/minikube-linux-amd64
++ export E2E_BIN=out/e2e-linux-amd64
++ E2E_BIN=out/e2e-linux-amd64
++ chmod +x out/minikube-linux-amd64 out/e2e-linux-amd64 out/docker-machine-driver-hyperkit out/docker-machine-driver-hyperkit.d out/docker-machine-driver-kvm2 out/docker-machine-driver-kvm2.d
+++ pgrep 'minikube-linux-amd64|e2e-linux-amd64'
+++ true
++ procs=
++ [[ '' != '' ]]
++ echo ''

++ echo '>> Cleaning up after previous test runs ...'
>> Cleaning up after previous test runs ...
++ for stale_dir in ${TEST_ROOT}/*
++ echo '* Cleaning stale test: /home/jenkins/minikube-integration/*'
* Cleaning stale test: /home/jenkins/minikube-integration/*
++ export 'MINIKUBE_HOME=/home/jenkins/minikube-integration/*/.minikube'
++ MINIKUBE_HOME='/home/jenkins/minikube-integration/*/.minikube'
++ export 'KUBECONFIG=/home/jenkins/minikube-integration/*/kubeconfig'
++ KUBECONFIG='/home/jenkins/minikube-integration/*/kubeconfig'
++ [[ -d /home/jenkins/minikube-integration/*/.minikube ]]
++ [[ -f /home/jenkins/minikube-integration/*/kubeconfig ]]
++ rmdir '/home/jenkins/minikube-integration/*'
rmdir: failed to remove '/home/jenkins/minikube-integration/*': No such file or directory
++ true
++ type -P virsh
/usr/bin/virsh
++ virsh -c qemu:///system list --all
 Id    Name                           State
----------------------------------------------------

++ virsh -c qemu:///system list --all
++ grep minikube
++ awk '{ print $2 }'
++ xargs -I '{}' sh -c 'virsh -c qemu:///system destroy {}; virsh -c qemu:///system undefine {}'
++ type -P vboxmanage
/usr/bin/vboxmanage
++ vboxmanage list vms
++ vboxmanage list vms
++ grep minikube
++ cut '-d"' -f2
++ xargs -I '{}' sh -c 'vboxmanage startvm {} --type emergencystop; vboxmanage unregistervm {} --delete'
++ type -P hdiutil
++ [[ none == \h\y\p\e\r\k\i\t ]]
+++ pgrep kubectl
++ kprocs='6908 19361'
++ [[ 6908 19361 != '' ]]
++ echo 'error: killing hung kubectl processes ...'
error: killing hung kubectl processes ...
++ ps -f -p 6908 19361
UID        PID  PPID  C STIME TTY      STAT   TIME CMD
jenkins   6908     1  0 13:21 ?        Sl     0:00 /usr/local/bin/kubectl --context minikube proxy --port=0
root     19361     1  0 13:11 ?        Sl     0:00 /usr/local/bin/kubectl --context minikube proxy --port=0
++ sudo -E kill 6908 19361
++ cleanup_stale_routes
++ local 'show=netstat -rn -f inet'
++ local 'del=sudo route -n delete'
+++ uname
++ [[ Linux == \L\i\n\u\x ]]
++ show='ip route show'
++ del='sudo ip route delete'
+++ ip route show
+++ grep 10.96.0.0
+++ awk '{ print $1 }'
+++ true
++ local troutes=
++ mkdir -p /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a
++ export MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube
++ MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube
++ export MINIKUBE_WANTREPORTERRORPROMPT=False
++ MINIKUBE_WANTREPORTERRORPROMPT=False
++ export KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/kubeconfig
++ KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/kubeconfig
++ echo ''

++ echo '>> ISO URL'
>> ISO URL
++ out/minikube-linux-amd64 start -h
++ grep iso-url
      --iso-url string                    Location of the minikube iso (default "https://storage.googleapis.com/minikube/iso/minikube-v0.35.0.iso")
++ echo ''

+++ date
++ echo '>> Starting out/e2e-linux-amd64 at Fri Mar 22 14:07:19 UTC 2019'
>> Starting out/e2e-linux-amd64 at Fri Mar 22 14:07:19 UTC 2019
++ sudo -E out/e2e-linux-amd64 '-minikube-start-args=--vm-driver=none ' '-minikube-args=--v=10 --logtostderr --bootstrapper=kubeadm' -test.v -test.timeout=50m -binary=out/minikube-linux-amd64
=== RUN   TestDocker
--- SKIP: TestDocker (0.00s)
	docker_test.go:32: skipping test as none driver does not bundle docker
=== RUN   TestFunctional
14:07:19 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 status --format={{.Host}} --v=10 --logtostderr --bootstrapper=kubeadm]
14:07:19 | ! W0322 14:07:19.730467   14061 root.go:145] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/config/config.json: no such file or directory
14:07:19 | ! I0322 14:07:19.731058   14061 notify.go:126] Checking for updates...
14:07:19 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 start --vm-driver=none --v=10 --logtostderr --bootstrapper=kubeadm --alsologtostderr --v=2]
14:07:19 | ! W0322 14:07:19.836569   14071 root.go:145] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/config/config.json: no such file or directory
14:07:19 | ! I0322 14:07:19.836735   14071 notify.go:126] Checking for updates...
14:07:19 | > o   minikube v0.35.0 on linux (amd64)
14:07:19 | > $   Downloading Kubernetes v1.13.4 images in the background ...
14:07:19 | ! I0322 14:07:19.907102   14071 start.go:605] Saving config:
14:07:19 | ! {
14:07:19 | !   "MachineConfig": {
14:07:19 | !     "MinikubeISO": "https://storage.googleapis.com/minikube/iso/minikube-v0.35.0.iso",
14:07:19 | !     "Memory": 2048,
14:07:19 | !     "CPUs": 2,
14:07:19 | !     "DiskSize": 20000,
14:07:19 | !     "VMDriver": "none",
14:07:19 | !     "ContainerRuntime": "docker",
14:07:19 | !     "HyperkitVpnKitSock": "",
14:07:19 | !     "HyperkitVSockPorts": [],
14:07:19 | !     "XhyveDiskDriver": "ahci-hd",
14:07:19 | !     "DockerEnv": null,
14:07:19 | !     "InsecureRegistry": null,
14:07:19 | !     "RegistryMirror": null,
14:07:19 | !     "HostOnlyCIDR": "192.168.99.1/24",
14:07:19 | !     "HypervVirtualSwitch": "",
14:07:19 | !     "KvmNetwork": "default",
14:07:19 | !     "DockerOpt": null,
14:07:19 | !     "DisableDriverMounts": false,
14:07:19 | !     "NFSShare": [],
14:07:19 | !     "NFSSharesRoot": "/nfsshares",
14:07:19 | !     "UUID": "",
14:07:19 | !     "GPU": false,
14:07:19 | !     "NoVTXCheck": false
14:07:19 | !   },
14:07:19 | !   "KubernetesConfig": {
14:07:19 | !     "KubernetesVersion": "v1.13.4",
14:07:19 | !     "NodeIP": "",
14:07:19 | !     "NodePort": 8443,
14:07:19 | !     "NodeName": "minikube",
14:07:19 | !     "APIServerName": "minikubeCA",
14:07:19 | !     "APIServerNames": null,
14:07:19 | !     "APIServerIPs": null,
14:07:19 | !     "DNSDomain": "cluster.local",
14:07:19 | !     "ContainerRuntime": "docker",
14:07:19 | !     "CRISocket": "",
14:07:19 | !     "NetworkPlugin": "",
14:07:19 | !     "FeatureGates": "",
14:07:19 | !     "ServiceCIDR": "10.96.0.0/12",
14:07:19 | !     "ImageRepository": "",
14:07:19 | !     "ExtraOptions": null,
14:07:19 | !     "ShouldLoadCachedImages": true,
14:07:19 | !     "EnableDefaultCNI": false
14:07:19 | !   }
14:07:19 | ! }
14:07:19 | ! I0322 14:07:19.907310   14071 cache_images.go:292] Attempting to cache image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
14:07:19 | ! I0322 14:07:19.907325   14071 cluster.go:68] Machine does not exist... provisioning new machine
14:07:19 | ! I0322 14:07:19.907335   14071 cluster.go:69] Provisioning machine with config: {MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v0.35.0.iso Memory:2048 CPUs:2 DiskSize:20000 VMDriver:none ContainerRuntime:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] XhyveDiskDriver:ahci-hd DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: KvmNetwork:default Downloader:{} DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: GPU:false NoVTXCheck:false}
14:07:19 | ! I0322 14:07:19.907337   14071 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-apiserver-amd64:v1.13.4 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-apiserver-amd64_v1.13.4
14:07:19 | > > Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
14:07:19 | ! 2019/03/22 14:07:19 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
14:07:19 | ! 2019/03/22 14:07:19 No matching credentials were found, falling back on anonymous
14:07:19 | ! I0322 14:07:19.907531   14071 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-proxy-amd64:v1.13.4 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-proxy-amd64_v1.13.4
14:07:19 | ! 2019/03/22 14:07:19 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
14:07:19 | ! 2019/03/22 14:07:19 No matching credentials were found, falling back on anonymous
14:07:19 | ! I0322 14:07:19.907776   14071 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-scheduler-amd64:v1.13.4 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-scheduler-amd64_v1.13.4
14:07:19 | ! 2019/03/22 14:07:19 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
14:07:19 | ! 2019/03/22 14:07:19 No matching credentials were found, falling back on anonymous
14:07:19 | ! I0322 14:07:19.907953   14071 cache_images.go:292] Attempting to cache image: k8s.gcr.io/pause-amd64:3.1 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause-amd64_3.1
14:07:19 | ! I0322 14:07:19.907977   14071 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-controller-manager-amd64:v1.13.4 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-amd64_v1.13.4
14:07:19 | ! 2019/03/22 14:07:19 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
14:07:19 | ! 2019/03/22 14:07:19 No matching credentials were found, falling back on anonymous
14:07:19 | ! 2019/03/22 14:07:19 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
14:07:19 | ! 2019/03/22 14:07:19 No matching credentials were found, falling back on anonymous
14:07:19 | ! I0322 14:07:19.908096   14071 cache_images.go:292] Attempting to cache image: k8s.gcr.io/pause:3.1 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause_3.1
14:07:19 | ! 2019/03/22 14:07:19 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
14:07:19 | ! 2019/03/22 14:07:19 No matching credentials were found, falling back on anonymous
14:07:19 | ! 2019/03/22 14:07:19 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
14:07:19 | ! 2019/03/22 14:07:19 No matching credentials were found, falling back on anonymous
14:07:19 | ! I0322 14:07:19.908219   14071 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1
14:07:19 | ! 2019/03/22 14:07:19 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
14:07:19 | ! 2019/03/22 14:07:19 No matching credentials were found, falling back on anonymous
14:07:19 | ! I0322 14:07:19.908290   14071 cache_images.go:292] Attempting to cache image: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.8
14:07:19 | ! I0322 14:07:19.908297   14071 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-addon-manager:v8.6 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v8.6
14:07:19 | ! 2019/03/22 14:07:19 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
14:07:19 | ! 2019/03/22 14:07:19 No matching credentials were found, falling back on anonymous
14:07:19 | ! 2019/03/22 14:07:19 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
14:07:19 | ! 2019/03/22 14:07:19 No matching credentials were found, falling back on anonymous
14:07:19 | ! I0322 14:07:19.908382   14071 cache_images.go:292] Attempting to cache image: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.8
14:07:19 | ! I0322 14:07:19.908385   14071 cache_images.go:292] Attempting to cache image: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.8
14:07:19 | ! 2019/03/22 14:07:19 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
14:07:19 | ! 2019/03/22 14:07:19 No matching credentials were found, falling back on anonymous
14:07:19 | ! 2019/03/22 14:07:19 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
14:07:19 | ! 2019/03/22 14:07:19 No matching credentials were found, falling back on anonymous
14:07:19 | ! I0322 14:07:19.908447   14071 cache_images.go:292] Attempting to cache image: k8s.gcr.io/etcd-amd64:3.2.24 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.2.24
14:07:19 | ! 2019/03/22 14:07:19 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
14:07:19 | ! 2019/03/22 14:07:19 No matching credentials were found, falling back on anonymous
14:07:19 | ! I0322 14:07:19.908773   14071 cache_images.go:292] Attempting to cache image: k8s.gcr.io/coredns:1.2.6 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/coredns_1.2.6
14:07:19 | ! 2019/03/22 14:07:19 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
14:07:19 | ! 2019/03/22 14:07:19 No matching credentials were found, falling back on anonymous
14:07:19 | ! I0322 14:07:19.928736   14071 cache_images.go:316] OPENING: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.8
14:07:19 | ! I0322 14:07:19.929324   14071 cache_images.go:316] OPENING: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-amd64_v1.13.4
14:07:19 | ! I0322 14:07:19.930076   14071 cache_images.go:316] OPENING: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-scheduler-amd64_v1.13.4
14:07:19 | ! I0322 14:07:19.932221   14071 cache_images.go:316] OPENING: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/coredns_1.2.6
14:07:19 | ! I0322 14:07:19.932287   14071 cache_images.go:316] OPENING: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.8
14:07:19 | ! I0322 14:07:19.933289   14071 cache_images.go:316] OPENING: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1
14:07:19 | ! I0322 14:07:19.933867   14071 cache_images.go:316] OPENING: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-proxy-amd64_v1.13.4
14:07:19 | ! I0322 14:07:19.933916   14071 cache_images.go:316] OPENING: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v8.6
14:07:19 | ! I0322 14:07:19.933923   14071 cache_images.go:316] OPENING: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause-amd64_3.1
14:07:19 | ! I0322 14:07:19.934718   14071 cache_images.go:316] OPENING: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.2.24
14:07:19 | ! I0322 14:07:19.935828   14071 cache_images.go:316] OPENING: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-apiserver-amd64_v1.13.4
14:07:19 | ! I0322 14:07:19.936269   14071 cache_images.go:316] OPENING: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.8
14:07:19 | ! I0322 14:07:19.937284   14071 cache_images.go:316] OPENING: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause_3.1
14:07:19 | ! I0322 14:07:19.950468   14071 cache_images.go:316] OPENING: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
14:07:20 | > - "minikube" IP address is 10.128.0.3
14:07:20 | ! I0322 14:07:20.532630   14071 start.go:605] Saving config:
14:07:20 | ! {
14:07:20 | !   "MachineConfig": {
14:07:20 | !     "MinikubeISO": "https://storage.googleapis.com/minikube/iso/minikube-v0.35.0.iso",
14:07:20 | !     "Memory": 2048,
14:07:20 | !     "CPUs": 2,
14:07:20 | !     "DiskSize": 20000,
14:07:20 | !     "VMDriver": "none",
14:07:20 | !     "ContainerRuntime": "docker",
14:07:20 | !     "HyperkitVpnKitSock": "",
14:07:20 | !     "HyperkitVSockPorts": [],
14:07:20 | !     "XhyveDiskDriver": "ahci-hd",
14:07:20 | !     "DockerEnv": null,
14:07:20 | !     "InsecureRegistry": null,
14:07:20 | !     "RegistryMirror": null,
14:07:20 | !     "HostOnlyCIDR": "192.168.99.1/24",
14:07:20 | !     "HypervVirtualSwitch": "",
14:07:20 | !     "KvmNetwork": "default",
14:07:20 | !     "DockerOpt": null,
14:07:20 | !     "DisableDriverMounts": false,
14:07:20 | !     "NFSShare": [],
14:07:20 | !     "NFSSharesRoot": "/nfsshares",
14:07:20 | !     "UUID": "",
14:07:20 | !     "GPU": false,
14:07:20 | !     "NoVTXCheck": false
14:07:20 | !   },
14:07:20 | !   "KubernetesConfig": {
14:07:20 | !     "KubernetesVersion": "v1.13.4",
14:07:20 | !     "NodeIP": "10.128.0.3",
14:07:20 | !     "NodePort": 8443,
14:07:20 | !     "NodeName": "minikube",
14:07:20 | !     "APIServerName": "minikubeCA",
14:07:20 | !     "APIServerNames": null,
14:07:20 | !     "APIServerIPs": null,
14:07:20 | !     "DNSDomain": "cluster.local",
14:07:20 | !     "ContainerRuntime": "docker",
14:07:20 | !     "CRISocket": "",
14:07:20 | !     "NetworkPlugin": "",
14:07:20 | !     "FeatureGates": "",
14:07:20 | !     "ServiceCIDR": "10.96.0.0/12",
14:07:20 | !     "ImageRepository": "",
14:07:20 | !     "ExtraOptions": null,
14:07:20 | !     "ShouldLoadCachedImages": true,
14:07:20 | !     "EnableDefaultCNI": false
14:07:20 | !   }
14:07:20 | ! }
14:07:20 | ! I0322 14:07:20.533220   14071 exec_runner.go:39] Run: systemctl is-active --quiet service containerd
14:07:20 | > - Configuring Docker as the container runtime ...
14:07:20 | ! I0322 14:07:20.541250   14071 exec_runner.go:39] Run: systemctl is-active --quiet service crio
14:07:20 | ! I0322 14:07:20.546915   14071 exec_runner.go:39] Run: systemctl is-active --quiet service rkt-api
14:07:20 | ! I0322 14:07:20.555865   14071 exec_runner.go:39] Run: sudo systemctl restart docker
14:07:21 | ! I0322 14:07:21.850121   14071 cache_images.go:83] Successfully cached all images.
14:07:22 | ! I0322 14:07:22.291326   14071 exec_runner.go:50] Run with output: docker version --format '{{.Server.Version}}'
14:07:22 | > - Version of container runtime is 18.06.1-ce
14:07:22 | > - Preparing Kubernetes environment ...
14:07:22 | ! I0322 14:07:22.359497   14071 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
14:07:22 | ! I0322 14:07:22.359510   14071 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-amd64_v1.13.4
14:07:22 | ! I0322 14:07:22.359531   14071 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-apiserver-amd64_v1.13.4
14:07:22 | ! I0322 14:07:22.359526   14071 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-proxy-amd64_v1.13.4
14:07:22 | ! I0322 14:07:22.365770   14071 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.8
14:07:22 | ! I0322 14:07:22.367075   14071 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.2.24
14:07:22 | ! I0322 14:07:22.374593   14071 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/coredns_1.2.6
14:07:22 | ! I0322 14:07:22.359514   14071 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.8
14:07:22 | ! I0322 14:07:22.375763   14071 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1
14:07:22 | ! I0322 14:07:22.376866   14071 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-scheduler-amd64_v1.13.4
14:07:22 | ! I0322 14:07:22.380792   14071 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause-amd64_3.1
14:07:22 | ! I0322 14:07:22.381694   14071 docker.go:89] Loading image: /tmp/pause-amd64_3.1
14:07:22 | ! I0322 14:07:22.392103   14071 exec_runner.go:39] Run: docker load -i /tmp/pause-amd64_3.1
14:07:22 | ! I0322 14:07:22.388857   14071 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v8.6
14:07:22 | ! I0322 14:07:22.392063   14071 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause_3.1
14:07:22 | ! I0322 14:07:22.392079   14071 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.8
14:07:22 | ! I0322 14:07:22.605105   14071 exec_runner.go:39] Run: sudo rm -rf /tmp/pause-amd64_3.1
14:07:22 | ! I0322 14:07:22.612004   14071 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause-amd64_3.1 from cache
14:07:22 | ! I0322 14:07:22.612045   14071 docker.go:89] Loading image: /tmp/k8s-dns-sidecar-amd64_1.14.8
14:07:22 | ! I0322 14:07:22.612054   14071 exec_runner.go:39] Run: docker load -i /tmp/k8s-dns-sidecar-amd64_1.14.8
14:07:22 | ! I0322 14:07:22.781797   14071 exec_runner.go:39] Run: sudo rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.8
14:07:22 | ! I0322 14:07:22.790496   14071 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.8 from cache
14:07:22 | ! I0322 14:07:22.790535   14071 docker.go:89] Loading image: /tmp/pause_3.1
14:07:22 | ! I0322 14:07:22.790543   14071 exec_runner.go:39] Run: docker load -i /tmp/pause_3.1
14:07:22 | ! I0322 14:07:22.927694   14071 exec_runner.go:39] Run: sudo rm -rf /tmp/pause_3.1
14:07:22 | ! I0322 14:07:22.934638   14071 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause_3.1 from cache
14:07:22 | ! I0322 14:07:22.934678   14071 docker.go:89] Loading image: /tmp/k8s-dns-kube-dns-amd64_1.14.8
14:07:22 | ! I0322 14:07:22.934687   14071 exec_runner.go:39] Run: docker load -i /tmp/k8s-dns-kube-dns-amd64_1.14.8
14:07:23 | ! I0322 14:07:23.098712   14071 exec_runner.go:39] Run: sudo rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.8
14:07:23 | ! I0322 14:07:23.108363   14071 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.8 from cache
14:07:23 | ! I0322 14:07:23.108403   14071 docker.go:89] Loading image: /tmp/storage-provisioner_v1.8.1
14:07:23 | ! I0322 14:07:23.108412   14071 exec_runner.go:39] Run: docker load -i /tmp/storage-provisioner_v1.8.1
14:07:23 | ! I0322 14:07:23.282963   14071 exec_runner.go:39] Run: sudo rm -rf /tmp/storage-provisioner_v1.8.1
14:07:23 | ! I0322 14:07:23.293492   14071 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 from cache
14:07:23 | ! I0322 14:07:23.293533   14071 docker.go:89] Loading image: /tmp/coredns_1.2.6
14:07:23 | ! I0322 14:07:23.293543   14071 exec_runner.go:39] Run: docker load -i /tmp/coredns_1.2.6
14:07:23 | ! I0322 14:07:23.470326   14071 exec_runner.go:39] Run: sudo rm -rf /tmp/coredns_1.2.6
14:07:23 | ! I0322 14:07:23.479619   14071 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/coredns_1.2.6 from cache
14:07:23 | ! I0322 14:07:23.479663   14071 docker.go:89] Loading image: /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
14:07:23 | ! I0322 14:07:23.479672   14071 exec_runner.go:39] Run: docker load -i /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
14:07:23 | ! I0322 14:07:23.642042   14071 exec_runner.go:39] Run: sudo rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
14:07:23 | ! I0322 14:07:23.650839   14071 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.8 from cache
14:07:23 | ! I0322 14:07:23.650879   14071 docker.go:89] Loading image: /tmp/kubernetes-dashboard-amd64_v1.10.1
14:07:23 | ! I0322 14:07:23.650887   14071 exec_runner.go:39] Run: docker load -i /tmp/kubernetes-dashboard-amd64_v1.10.1
14:07:23 | ! I0322 14:07:23.855863   14071 exec_runner.go:39] Run: sudo rm -rf /tmp/kubernetes-dashboard-amd64_v1.10.1
14:07:23 | ! I0322 14:07:23.871142   14071 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1 from cache
14:07:23 | ! I0322 14:07:23.871198   14071 docker.go:89] Loading image: /tmp/kube-proxy-amd64_v1.13.4
14:07:23 | ! I0322 14:07:23.871212   14071 exec_runner.go:39] Run: docker load -i /tmp/kube-proxy-amd64_v1.13.4
14:07:24 | ! I0322 14:07:24.057562   14071 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-proxy-amd64_v1.13.4
14:07:24 | ! I0322 14:07:24.070256   14071 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-proxy-amd64_v1.13.4 from cache
14:07:24 | ! I0322 14:07:24.070307   14071 docker.go:89] Loading image: /tmp/kube-addon-manager_v8.6
14:07:24 | ! I0322 14:07:24.070317   14071 exec_runner.go:39] Run: docker load -i /tmp/kube-addon-manager_v8.6
14:07:24 | ! I0322 14:07:24.239576   14071 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-addon-manager_v8.6
14:07:24 | !
I0322 14:07:24.251031 14071 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v8.6 from cache 14:07:24 | ! I0322 14:07:24.251074 14071 docker.go:89] Loading image: /tmp/kube-controller-manager-amd64_v1.13.4 14:07:24 | ! I0322 14:07:24.251083 14071 exec_runner.go:39] Run: docker load -i /tmp/kube-controller-manager-amd64_v1.13.4 14:07:24 | ! I0322 14:07:24.443276 14071 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-controller-manager-amd64_v1.13.4 14:07:24 | ! I0322 14:07:24.458406 14071 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-amd64_v1.13.4 from cache 14:07:24 | ! I0322 14:07:24.458500 14071 docker.go:89] Loading image: /tmp/kube-scheduler-amd64_v1.13.4 14:07:24 | ! I0322 14:07:24.458513 14071 exec_runner.go:39] Run: docker load -i /tmp/kube-scheduler-amd64_v1.13.4 14:07:24 | ! I0322 14:07:24.648170 14071 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-scheduler-amd64_v1.13.4 14:07:24 | ! I0322 14:07:24.660945 14071 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-scheduler-amd64_v1.13.4 from cache 14:07:24 | ! I0322 14:07:24.660990 14071 docker.go:89] Loading image: /tmp/kube-apiserver-amd64_v1.13.4 14:07:24 | ! I0322 14:07:24.661000 14071 exec_runner.go:39] Run: docker load -i /tmp/kube-apiserver-amd64_v1.13.4 14:07:24 | ! I0322 14:07:24.871219 14071 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-apiserver-amd64_v1.13.4 14:07:24 | ! 
I0322 14:07:24.887812 14071 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-apiserver-amd64_v1.13.4 from cache 14:07:24 | ! I0322 14:07:24.887856 14071 docker.go:89] Loading image: /tmp/etcd-amd64_3.2.24 14:07:24 | ! I0322 14:07:24.887865 14071 exec_runner.go:39] Run: docker load -i /tmp/etcd-amd64_3.2.24 14:07:25 | ! I0322 14:07:25.106817 14071 exec_runner.go:39] Run: sudo rm -rf /tmp/etcd-amd64_3.2.24 14:07:25 | ! I0322 14:07:25.125712 14071 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.2.24 from cache 14:07:25 | ! I0322 14:07:25.125761 14071 cache_images.go:109] Successfully loaded all cached images. 14:07:25 | ! I0322 14:07:25.126090 14071 kubeadm.go:452] kubelet v1.13.4 config: 14:07:25 | ! [Unit] 14:07:25 | ! Wants=docker.socket 14:07:25 | ! [Service] 14:07:25 | ! ExecStart= 14:07:25 | ! ExecStart=/usr/bin/kubelet --allow-privileged=true --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests 14:07:25 | ! [Install] 14:07:25 | > @ Downloading kubeadm v1.13.4 14:07:25 | > @ Downloading kubelet v1.13.4 14:07:25 | ! I0322 14:07:25.861778 14071 exec_runner.go:39] Run: 14:07:25 | ! sudo systemctl daemon-reload && 14:07:25 | ! sudo systemctl enable kubelet && 14:07:25 | ! sudo systemctl start kubelet 14:07:25 | ! I0322 14:07:25.989976 14071 certs.go:46] Setting up certificates for IP: 10.128.0.3 14:07:27 | > : Waiting for image downloads to complete ... 
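The cache-loading phase above repeats one pattern per image: stage the tarball under /tmp, run `docker load -i`, then `sudo rm -rf` the staging copy (cache_images.go / docker.go / exec_runner.go). A minimal shell sketch of that loop, assuming an illustrative cache directory and image list taken from the log — this is not minikube's actual implementation, which does this from Go:

```shell
#!/usr/bin/env bash
# Sketch of minikube's cached-image load loop; CACHE_DIR and the image
# list are assumptions lifted from the log, not real minikube defaults.
set -euo pipefail

CACHE_DIR="${CACHE_DIR:-$HOME/.minikube/cache/images/k8s.gcr.io}"
DRY_RUN="${DRY_RUN:-1}"   # default: print commands instead of running them

run() {
  # Echo the command when DRY_RUN=1, otherwise execute it.
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

for img in pause-amd64_3.1 coredns_1.2.6 etcd-amd64_3.2.24; do
  tmp="/tmp/$img"
  run cp "$CACHE_DIR/$img" "$tmp"   # stage the tarball
  run docker load -i "$tmp"         # load into the local Docker daemon
  run sudo rm -rf "$tmp"            # clean up the staging copy
done
```

Set `DRY_RUN=0` to actually execute the docker/sudo commands on a host with the cache populated.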
14:07:27 | ! I0322 14:07:27.412450 14071 kubeconfig.go:127] Using kubeconfig: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/kubeconfig
14:07:27 | > - Pulling images required by Kubernetes v1.13.4 ...
14:07:27 | ! I0322 14:07:27.412971 14071 exec_runner.go:39] Run: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml
14:07:28 | > - Launching Kubernetes v1.13.4 using kubeadm ...
14:07:28 | ! I0322 14:07:28.668255 14071 exec_runner.go:50] Run with output:
14:07:28 | ! sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI
14:08:03 | ! I0322 14:08:03.149476 14071 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-apiserver" ...
14:08:03 | ! I0322 14:08:03.165798 14071 kubernetes.go:134] Found 0 Pods for label selector component=kube-apiserver
14:09:08 | ! I0322 14:09:08.170847 14071 kubernetes.go:134] Found 1 Pods for label selector component=kube-apiserver
14:09:10 | ! I0322 14:09:10.170858 14071 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-proxy" ...
14:09:10 | ! I0322 14:09:10.174727 14071 kubernetes.go:134] Found 1 Pods for label selector k8s-app=kube-proxy
14:09:10 | ! I0322 14:09:10.174815 14071 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=etcd" ...
14:09:10 | ! I0322 14:09:10.178594 14071 kubernetes.go:134] Found 1 Pods for label selector component=etcd
14:09:10 | ! I0322 14:09:10.178675 14071 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-scheduler" ...
14:09:10 | ! I0322 14:09:10.181891 14071 kubernetes.go:134] Found 0 Pods for label selector component=kube-scheduler
14:09:13 | ! I0322 14:09:13.185614 14071 kubernetes.go:134] Found 1 Pods for label selector component=kube-scheduler
14:09:20 | ! I0322 14:09:20.185751 14071 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-controller-manager" ...
14:09:20 | ! I0322 14:09:20.188818 14071 kubernetes.go:134] Found 1 Pods for label selector component=kube-controller-manager
14:09:20 | ! I0322 14:09:20.188881 14071 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-addon-manager" ...
14:09:20 | ! I0322 14:09:20.191776 14071 kubernetes.go:134] Found 1 Pods for label selector component=kube-addon-manager
14:09:20 | ! I0322 14:09:20.191822 14071 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-dns" ...
14:09:20 | ! I0322 14:09:20.194754 14071 kubernetes.go:134] Found 2 Pods for label selector k8s-app=kube-dns
14:09:20 | > : Waiting for pods: apiserver proxy etcd scheduler controller addon-manager dns
14:09:20 | > - Configuring cluster permissions ...
14:09:20 | ! I0322 14:09:20.204791 14071 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-apiserver" ...
14:09:20 | ! I0322 14:09:20.207432 14071 kubernetes.go:134] Found 1 Pods for label selector component=kube-apiserver
14:09:20 | ! I0322 14:09:20.207461 14071 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-proxy" ...
14:09:20 | ! I0322 14:09:20.210880 14071 kubernetes.go:134] Found 1 Pods for label selector k8s-app=kube-proxy
14:09:20 | ! I0322 14:09:20.210918 14071 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=etcd" ...
14:09:20 | ! I0322 14:09:20.214592 14071 kubernetes.go:134] Found 1 Pods for label selector component=etcd
14:09:20 | ! I0322 14:09:20.214621 14071 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-scheduler" ...
14:09:20 | ! I0322 14:09:20.217669 14071 kubernetes.go:134] Found 1 Pods for label selector component=kube-scheduler
14:09:20 | ! I0322 14:09:20.217696 14071 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-controller-manager" ...
14:09:20 | ! I0322 14:09:20.220268 14071 kubernetes.go:134] Found 1 Pods for label selector component=kube-controller-manager
14:09:20 | ! I0322 14:09:20.220300 14071 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-addon-manager" ...
14:09:20 | ! I0322 14:09:20.222798 14071 kubernetes.go:134] Found 1 Pods for label selector component=kube-addon-manager
14:09:20 | ! I0322 14:09:20.222824 14071 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-dns" ...
14:09:20 | ! I0322 14:09:20.225519 14071 kubernetes.go:134] Found 2 Pods for label selector k8s-app=kube-dns
14:09:20 | ! I0322 14:09:20.225567 14071 exec_runner.go:50] Run with output: sudo systemctl is-active kubelet
14:09:20 | ! I0322 14:09:20.244049 14071 kubeadm.go:134] https://10.128.0.3:8443/healthz response: &{Status:200 OK StatusCode:200 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Date:[Fri, 22 Mar 2019 14:09:20 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] Body:0xc000375c40 ContentLength:2 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003a2600 TLS:0xc0000c82c0}
14:09:20 | > - Verifying component health .....
14:09:20 | > > Configuring local host environment ...
14:09:20 | ! ! The 'none' driver provides limited isolation and may reduce system security and reliability.
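The "Waiting for pod with label ... / Found N Pods for label selector ..." sequence above is a readiness poll per kube-system component (kubernetes.go:123/134). Minikube does this through client-go; an equivalent sketch using kubectl is shown below. The kubectl invocation and polling interval are assumptions; the selectors are the ones that appear in the log:

```shell
#!/usr/bin/env bash
# Sketch of the per-component pod-readiness poll. Requires a reachable
# cluster when actually run; minikube itself uses client-go, not kubectl.
set -euo pipefail

wait_for_pods() {
  # Poll kube-system for pods matching $1 until at least $2 (default 1) exist.
  local selector="$1" want="${2:-1}" found=0
  until [ "$found" -ge "$want" ]; do
    found=$(kubectl get pods -n kube-system -l "$selector" \
              --no-headers 2>/dev/null | wc -l | tr -d ' ')
    echo "Found $found Pods for label selector $selector"
    [ "$found" -ge "$want" ] || sleep 2
  done
}

# Usage (against a live cluster), matching the selectors in the log:
#   for sel in component=kube-apiserver k8s-app=kube-proxy component=etcd \
#              component=kube-scheduler component=kube-controller-manager \
#              component=kube-addon-manager k8s-app=kube-dns; do
#     wait_for_pods "$sel"
#   done
```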
14:09:20 | ! ! For more information, see:
14:09:20 | > - https://github.com/kubernetes/minikube/blob/master/docs/vmdriver-none.md
14:09:20 | ! ! kubectl and minikube configuration will be stored in /home/jenkins
14:09:20 | ! ! To use kubectl or minikube commands as your own user, you may
14:09:20 | ! ! need to relocate them. For example, to overwrite your own settings:
14:09:20 | > - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
14:09:20 | > - sudo chown -R $USER $HOME/.kube $HOME/.minikube
14:09:20 | > i This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
14:09:20 | > + kubectl is now configured to use "minikube"
14:09:20 | > = Done! Thank you for using minikube!
14:09:20 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 status --format={{.Host}} --v=10 --logtostderr --bootstrapper=kubeadm]
14:09:20 | ! W0322 14:09:20.281206 17137 root.go:145] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/config/config.json: no such file or directory
14:09:20 | ! I0322 14:09:20.281354 17137 notify.go:126] Checking for updates...
14:09:20 | ! I0322 14:09:20.355609 17137 none.go:231] checking for running kubelet ...
14:09:20 | ! I0322 14:09:20.355632 17137 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet
14:09:20 | ! I0322 14:09:20.362096 17137 exec_runner.go:50] Run with output: sudo systemctl is-active kubelet
14:09:20 | ! I0322 14:09:20.374694 17137 interface.go:360] Looking for default routes with IPv4 addresses
14:09:20 | ! I0322 14:09:20.374713 17137 interface.go:365] Default route transits interface "eth0"
14:09:20 | ! I0322 14:09:20.374963 17137 interface.go:174] Interface eth0 is up
14:09:20 | ! I0322 14:09:20.375032 17137 interface.go:222] Interface "eth0" has 1 addresses :[10.128.0.3/32].
14:09:20 | ! I0322 14:09:20.375056 17137 interface.go:189] Checking addr 10.128.0.3/32.
14:09:20 | ! I0322 14:09:20.375065 17137 interface.go:196] IP found 10.128.0.3
14:09:20 | ! I0322 14:09:20.375073 17137 interface.go:228] Found valid IPv4 address 10.128.0.3 for interface "eth0".
14:09:20 | ! I0322 14:09:20.375080 17137 interface.go:371] Found active IP 10.128.0.3
14:09:20 | ! I0322 14:09:20.381534 17137 kubeadm.go:134] https://10.128.0.3:8443/healthz response: &{Status:200 OK StatusCode:200 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Date:[Fri, 22 Mar 2019 14:09:20 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] Body:0xc000425ec0 ContentLength:2 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004a3300 TLS:0xc00058a630}
14:09:20 | > Running
=== RUN TestFunctional/Status
=== RUN TestFunctional/DNS
=== PAUSE TestFunctional/DNS
=== RUN TestFunctional/Logs
=== PAUSE TestFunctional/Logs
=== RUN TestFunctional/Addons
=== PAUSE TestFunctional/Addons
=== RUN TestFunctional/Dashboard
=== PAUSE TestFunctional/Dashboard
=== RUN TestFunctional/ServicesList
=== PAUSE TestFunctional/ServicesList
=== RUN TestFunctional/Provisioning
=== PAUSE TestFunctional/Provisioning
=== RUN TestFunctional/Tunnel
14:09:20 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 tunnel --alsologtostderr -v 8 --logtostderr]
14:09:20 | ! W0322 14:09:20.816898 17163 root.go:145] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/config/config.json: no such file or directory
14:09:20 | ! I0322 14:09:20.817041 17163 notify.go:126] Checking for updates...
14:09:20 | ! I0322 14:09:20.921991 17163 tunnel.go:55] Creating docker machine client...
14:09:20 | ! I0322 14:09:20.922023 17163 tunnel.go:60] Creating k8s client...
14:09:20 | ! I0322 14:09:20.923100 17163 loader.go:359] Config loaded from file /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/kubeconfig
14:09:20 | ! I0322 14:09:20.925383 17163 none.go:231] checking for running kubelet ...
14:09:20 | ! I0322 14:09:20.925400 17163 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet
14:09:20 | ! I0322 14:09:20.932522 17163 interface.go:360] Looking for default routes with IPv4 addresses
14:09:20 | ! I0322 14:09:20.932542 17163 interface.go:365] Default route transits interface "eth0"
14:09:20 | ! I0322 14:09:20.932978 17163 interface.go:174] Interface eth0 is up
14:09:20 | ! I0322 14:09:20.933044 17163 interface.go:222] Interface "eth0" has 1 addresses :[10.128.0.3/32].
14:09:20 | ! I0322 14:09:20.933062 17163 interface.go:189] Checking addr 10.128.0.3/32.
14:09:20 | ! I0322 14:09:20.933072 17163 interface.go:196] IP found 10.128.0.3
14:09:20 | ! I0322 14:09:20.933080 17163 interface.go:228] Found valid IPv4 address 10.128.0.3 for interface "eth0".
14:09:20 | ! I0322 14:09:20.933087 17163 interface.go:371] Found active IP 10.128.0.3
14:09:20 | ! I0322 14:09:20.933279 17163 tunnel_manager.go:65] Setting up tunnel...
14:09:20 | ! I0322 14:09:20.933309 17163 tunnel_manager.go:75] Started minikube tunnel.
14:09:20 | ! I0322 14:09:20.933335 17163 tunnel_manager.go:81] waiting for tunnel to be ready for next check
14:09:20 | ! I0322 14:09:20.933342 17163 tunnel_manager.go:83] sleep for 5s
=== CONT TestFunctional/DNS
=== CONT TestFunctional/Dashboard
=== CONT TestFunctional/Addons
=== CONT TestFunctional/Provisioning
=== CONT TestFunctional/Logs
14:09:25 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 status --format={{.Host}} --v=10 --logtostderr --bootstrapper=kubeadm]
14:09:25 | ! W0322 14:09:25.328426 17478 root.go:145] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/config/config.json: no such file or directory
14:09:25 | ! I0322 14:09:25.331380 17478 notify.go:126] Checking for updates...
14:09:25 | ! I0322 14:09:25.452562 17478 none.go:231] checking for running kubelet ...
14:09:25 | ! I0322 14:09:25.452737 17478 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet
14:09:25 | ! I0322 14:09:25.459652 17478 exec_runner.go:50] Run with output: sudo systemctl is-active kubelet
14:09:25 | ! - Enabling dashboard ...
14:09:25 | ! I0322 14:09:25.481260 17478 interface.go:360] Looking for default routes with IPv4 addresses
14:09:25 | ! I0322 14:09:25.481325 17478 interface.go:365] Default route transits interface "eth0"
14:09:25 | ! I0322 14:09:25.481602 17478 interface.go:174] Interface eth0 is up
14:09:25 | ! I0322 14:09:25.481651 17478 interface.go:222] Interface "eth0" has 1 addresses :[10.128.0.3/32].
14:09:25 | ! I0322 14:09:25.481668 17478 interface.go:189] Checking addr 10.128.0.3/32.
14:09:25 | ! I0322 14:09:25.481674 17478 interface.go:196] IP found 10.128.0.3
14:09:25 | ! I0322 14:09:25.481679 17478 interface.go:228] Found valid IPv4 address 10.128.0.3 for interface "eth0".
14:09:25 | ! I0322 14:09:25.481684 17478 interface.go:371] Found active IP 10.128.0.3
14:09:25 | ! I0322 14:09:25.487736 17478 kubeadm.go:134] https://10.128.0.3:8443/healthz response: &{Status:200 OK StatusCode:200 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Date:[Fri, 22 Mar 2019 14:09:25 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] Body:0xc000434040 ContentLength:2 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000330d00 TLS:0xc0000d0b00}
14:09:25 | > Running
14:09:25 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 status --format={{.Host}} --v=10 --logtostderr --bootstrapper=kubeadm]
14:09:25 | ! - Verifying dashboard health ...
14:09:25 | ! I0322 14:09:25.529628 17522 notify.go:126] Checking for updates...
14:09:25 | ! I0322 14:09:25.611214 17522 none.go:231] checking for running kubelet ...
14:09:25 | ! I0322 14:09:25.611329 17522 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet
14:09:25 | ! I0322 14:09:25.627366 17522 exec_runner.go:50] Run with output: sudo systemctl is-active kubelet
14:09:25 | ! I0322 14:09:25.641710 17522 interface.go:360] Looking for default routes with IPv4 addresses
14:09:25 | ! I0322 14:09:25.641745 17522 interface.go:365] Default route transits interface "eth0"
14:09:25 | ! I0322 14:09:25.641962 17522 interface.go:174] Interface eth0 is up
14:09:25 | ! I0322 14:09:25.642029 17522 interface.go:222] Interface "eth0" has 1 addresses :[10.128.0.3/32].
14:09:25 | ! I0322 14:09:25.642046 17522 interface.go:189] Checking addr 10.128.0.3/32.
14:09:25 | ! I0322 14:09:25.642052 17522 interface.go:196] IP found 10.128.0.3
14:09:25 | ! I0322 14:09:25.642058 17522 interface.go:228] Found valid IPv4 address 10.128.0.3 for interface "eth0".
14:09:25 | ! I0322 14:09:25.642063 17522 interface.go:371] Found active IP 10.128.0.3
14:09:25 | ! I0322 14:09:25.648760 17522 kubeadm.go:134] https://10.128.0.3:8443/healthz response: &{Status:200 OK StatusCode:200 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Date:[Fri, 22 Mar 2019 14:09:25 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] Body:0xc00037c940 ContentLength:2 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00043db00 TLS:0xc00002f3f0}
14:09:25 | > Running
14:09:25 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 logs --v=10 --logtostderr --bootstrapper=kubeadm]
14:09:25 | ! I0322 14:09:25.699136 17537 notify.go:126] Checking for updates...
14:09:25 | ! I0322 14:09:25.791884 17537 exec_runner.go:50] Run with output: docker ps -a --filter="name=k8s_kube-apiserver" --format="{{.ID}}"
14:09:25 | ! I0322 14:09:25.871080 17537 logs.go:152] 1 containers: [6e967224b5b5]
14:09:25 | ! I0322 14:09:25.871141 17537 exec_runner.go:50] Run with output: docker ps -a --filter="name=k8s_coredns" --format="{{.ID}}"
14:09:25 | ! I0322 14:09:25.933593 17163 tunnel_manager.go:81] waiting for tunnel to be ready for next check
14:09:25 | ! I0322 14:09:25.933660 17163 tunnel_manager.go:100] check received
14:09:25 | ! I0322 14:09:25.933670 17163 tunnel.go:119] updating tunnel status...
14:09:25 | ! I0322 14:09:25.934504 17163 none.go:231] checking for running kubelet ...
14:09:25 | ! I0322 14:09:25.934539 17163 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet
14:09:25 | ! I0322 14:09:25.941737 17163 tunnel.go:124] minikube is running, trying to add route10.96.0.0/12 -> 10.128.0.3
14:09:25 | ! I0322 14:09:25.943611 17163 route_linux.go:98] skipping line: can't parse CIDR from routing table: 10.128.0.1
14:09:25 | ! I0322 14:09:25.945446 17163 route_linux.go:98] skipping line: can't parse CIDR from routing table: 10.128.0.1
14:09:25 | ! I0322 14:09:25.945486 17163 route_linux.go:40] Adding route for CIDR 10.96.0.0/12 to gateway 10.128.0.3
14:09:25 | ! I0322 14:09:25.945519 17163 route_linux.go:42] About to run command: [sudo ip route add 10.96.0.0/12 via 10.128.0.3]
14:09:25 | ! I0322 14:09:25.947018 17537 logs.go:152] 2 containers: [44cdb7aac98e 02c0a6965e4a]
14:09:25 | ! I0322 14:09:25.947074 17537 exec_runner.go:50] Run with output: docker ps -a --filter="name=k8s_kube-scheduler" --format="{{.ID}}"
14:09:25 | ! I0322 14:09:25.953102 17163 route_linux.go:48] []
14:09:25 | ! I0322 14:09:25.953138 17163 registry.go:77] registering tunnel: ID { Route: 10.96.0.0/12 -> 10.128.0.3, machineName: minikube, Pid: 17163 }
14:09:25 | ! I0322 14:09:25.953353 17163 registry.go:111] json marshalled: [ID { Route: 10.96.0.0/12 -> 10.128.0.3, machineName: minikube, Pid: 17163 }], [{"Route":{"Gateway":"10.128.0.3","DestCIDR":{"IP":"10.96.0.0","Mask":"//AAAA=="}},"MachineName":"minikube","Pid":17163}]
14:09:25 | ! I0322 14:09:25.953889 17163 round_trippers.go:383] GET https://10.128.0.3:8443/api/v1/services?timeout=1s
14:09:25 | ! I0322 14:09:25.953904 17163 round_trippers.go:390] Request Headers:
14:09:25 | ! I0322 14:09:25.953909 17163 round_trippers.go:393] Accept: application/json, */*
14:09:25 | ! I0322 14:09:25.953912 17163 round_trippers.go:393] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
14:09:25 | ! I0322 14:09:25.966540 17163 round_trippers.go:408] Response Status: 200 OK in 12 milliseconds
14:09:25 | ! I0322 14:09:25.966563 17163 round_trippers.go:411] Response Headers:
14:09:25 | ! I0322 14:09:25.966568 17163 round_trippers.go:414] Content-Length: 2173
14:09:25 | ! I0322 14:09:25.966729 17163 round_trippers.go:414] Date: Fri, 22 Mar 2019 14:09:25 GMT
14:09:25 | ! I0322 14:09:25.966733 17163 round_trippers.go:414] Content-Type: application/json
14:09:25 | ! I0322 14:09:25.966800 17163 request.go:897] Response Body: {"kind":"ServiceList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/services","resourceVersion":"494"},"items":[{"metadata":{"name":"kubernetes","namespace":"default","selfLink":"/api/v1/namespaces/default/services/kubernetes","uid":"e64748b6-4cab-11e9-aacb-42010a800003","resourceVersion":"42","creationTimestamp":"2019-03-22T14:07:59Z","labels":{"component":"apiserver","provider":"kubernetes"}},"spec":{"ports":[{"name":"https","protocol":"TCP","port":443,"targetPort":8443}],"clusterIP":"10.96.0.1","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"nginx-svc","namespace":"default","selfLink":"/api/v1/namespaces/default/services/nginx-svc","uid":"16b0d6f1-4cac-11e9-aacb-42010a800003","resourceVersion":"474","creationTimestamp":"2019-03-22T14:09:21Z","labels":{"run":"nginx-svc"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"run\":\"nginx-svc\"},\"name\": [truncated 1149 chars]
14:09:25 | ! I0322 14:09:25.968206 17163 loadbalancer_patcher.go:71] kubernetes is not type LoadBalancer, skipping.
14:09:25 | ! I0322 14:09:25.968220 17163 loadbalancer_patcher.go:74] nginx-svc is type LoadBalancer.
14:09:25 | ! I0322 14:09:25.968225 17163 loadbalancer_patcher.go:92] [nginx-svc] setting ClusterIP as the LoadBalancer Ingress
14:09:25 | ! I0322 14:09:25.968249 17163 request.go:897] Request Body: [{"op": "add", "path": "/status/loadBalancer/ingress", "value": [ { "ip": "10.100.116.174" } ] }]
14:09:25 | ! I0322 14:09:25.968313 17163 round_trippers.go:383] PATCH https://10.128.0.3:8443/api/v1/namespaces/default/services/nginx-svc/status?timeout=1s
14:09:25 | ! I0322 14:09:25.968318 17163 round_trippers.go:390] Request Headers:
14:09:25 | ! I0322 14:09:25.968322 17163 round_trippers.go:393] Content-Type: application/json-patch+json
14:09:25 | ! I0322 14:09:25.968326 17163 round_trippers.go:393] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
14:09:25 | ! I0322 14:09:25.968330 17163 round_trippers.go:393] Accept: application/json, */*
14:09:25 | ! I0322 14:09:25.973781 17163 round_trippers.go:408] Response Status: 200 OK in 5 milliseconds
14:09:25 | ! I0322 14:09:25.973802 17163 round_trippers.go:411] Response Headers:
14:09:25 | ! I0322 14:09:25.973806 17163 round_trippers.go:414] Content-Type: application/json
14:09:25 | ! I0322 14:09:25.973810 17163 round_trippers.go:414] Content-Length: 986
14:09:25 | ! I0322 14:09:25.973813 17163 round_trippers.go:414] Date: Fri, 22 Mar 2019 14:09:25 GMT
14:09:25 | ! I0322 14:09:25.973864 17163 request.go:897] Response Body: {"kind":"Service","apiVersion":"v1","metadata":{"name":"nginx-svc","namespace":"default","selfLink":"/api/v1/namespaces/default/services/nginx-svc/status","uid":"16b0d6f1-4cac-11e9-aacb-42010a800003","resourceVersion":"495","creationTimestamp":"2019-03-22T14:09:21Z","labels":{"run":"nginx-svc"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"run\":\"nginx-svc\"},\"name\":\"nginx-svc\",\"namespace\":\"default\"},\"spec\":{\"ports\":[{\"port\":80,\"protocol\":\"TCP\",\"targetPort\":80}],\"selector\":{\"run\":\"nginx-svc\"},\"sessionAffinity\":\"None\",\"type\":\"LoadBalancer\"}}\n"}},"spec":{"ports":[{"protocol":"TCP","port":80,"targetPort":80,"nodePort":31069}],"selector":{"run":"nginx-svc"},"clusterIP":"10.100.116.174","type":"LoadBalancer","sessionAffinity":"None","externalTrafficPolicy":"Cluster"},"status":{"loadBalancer":{"ingress":[{"ip":"10.100.116.174"}]}}}
14:09:25 | ! I0322 14:09:25.973907 17163 loadbalancer_patcher.go:108] Patched nginx-svc with IP 10.100.116.174
14:09:25 | ! I0322 14:09:25.973918 17163 loadbalancer_patcher.go:71] kube-dns is not type LoadBalancer, skipping.
14:09:25 | 
14:09:25 | !
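The tunnel's loadbalancer emulator above lists services, then PATCHes the `status` subresource of each `LoadBalancer`-type service, setting its ClusterIP as the ingress IP. A hedged sketch that builds the same JSON-patch body as the logged request; the `APISERVER`/`SVC`/`INGRESS_IP` values are placeholders taken from this run, and the curl usage assumes you supply credentials for the API server yourself:

```shell
#!/usr/bin/env bash
# Sketch of the loadbalancer_patcher PATCH. Values below mirror the log;
# they are illustrative, not defaults of any tool.
set -euo pipefail

APISERVER="${APISERVER:-https://10.128.0.3:8443}"
SVC="${SVC:-nginx-svc}"
INGRESS_IP="${INGRESS_IP:-10.100.116.174}"

# JSON patch identical in shape to the Request Body in the log.
patch_body() {
  printf '[{"op": "add", "path": "/status/loadBalancer/ingress", "value": [ { "ip": "%s" } ] }]' "$INGRESS_IP"
}

# Usage (requires API-server credentials, e.g. a bearer token or client cert):
#   curl -k -X PATCH "$APISERVER/api/v1/namespaces/default/services/$SVC/status" \
#        -H 'Content-Type: application/json-patch+json' \
#        --data "$(patch_body)"
```

Recent kubectl versions (1.24+) can do the equivalent with `kubectl patch --subresource=status --type=json`; minikube itself issues the request through client-go.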
I0322 14:09:25.973924 17163 tunnel.go:130] sending report id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Running, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s())
14:09:25 | > Status:
14:09:25 | ! I0322 14:09:25.973974 17163 tunnel_manager.go:108] minikube status: id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Running, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s())
14:09:25 | > machine: minikube
14:09:25 | ! I0322 14:09:25.973987 17163 tunnel_manager.go:83] sleep for 5s
14:09:25 | > pid: 17163
14:09:25 | > route: 10.96.0.0/12 -> 10.128.0.3
14:09:25 | > minikube: Running
14:09:25 | > services: [nginx-svc]
14:09:25 | > errors:
14:09:25 | > minikube: no errors
14:09:25 | > router: no errors
14:09:25 | > loadbalancer emulator: no errors
14:09:26 | ! I0322 14:09:26.063597 17537 logs.go:152] 1 containers: [e5a9056aa15b]
14:09:26 | ! I0322 14:09:26.063635 17537 exec_runner.go:50] Run with output: docker ps -a --filter="name=k8s_kube-proxy" --format="{{.ID}}"
14:09:26 | ! I0322 14:09:26.154213 17537 logs.go:152] 1 containers: [2d8a98d59f6a]
14:09:26 | > ==> coredns <==
14:09:26 | ! I0322 14:09:26.154356 17537 exec_runner.go:50] Run with output: docker logs --tail 50 44cdb7aac98e
14:09:26 | > .:53
14:09:26 | > 2019-03-22T14:08:15.325Z [INFO] CoreDNS-1.2.6
14:09:26 | > 2019-03-22T14:08:15.326Z [INFO] linux/amd64, go1.11.2, 756749c
14:09:26 | > CoreDNS-1.2.6
14:09:26 | > linux/amd64, go1.11.2, 756749c
14:09:26 | > [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
14:09:26 | > E0322 14:08:40.327823 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:318: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
14:09:26 | > E0322 14:08:40.327861 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:311: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
14:09:26 | > E0322 14:08:40.328130 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:313: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
14:09:26 | > ==> dmesg <==
14:09:26 | ! I0322 14:09:26.239904 17537 exec_runner.go:50] Run with output: sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 50
14:09:26 | > [Mar22 13:36] kvm [9306]: vcpu0, guest rIP: 0xffffffff8ce46066 unhandled rdmsr: 0x34
14:09:26 | > [ +0.008136] kvm [9306]: vcpu0, guest rIP: 0xffffffff8ce46066 unhandled rdmsr: 0x606
14:09:26 | > [Mar22 13:40] kvm [9610]: vcpu0, guest rIP: 0xffffffffb8e46066 unhandled rdmsr: 0x140
14:09:26 | > [ +0.008767] kvm [9610]: vcpu0, guest rIP: 0xffffffffb8e46066 unhandled rdmsr: 0x4e
14:09:26 | > [ +0.122095] kvm [9610]: vcpu1, guest rIP: 0xffffffffb8e46066 unhandled rdmsr: 0x140
14:09:26 | > [ +0.008911] kvm [9610]: vcpu1, guest rIP: 0xffffffffb8e46066 unhandled rdmsr: 0x4e
14:09:26 | > [Mar22 13:41] kvm [9610]: vcpu0, guest rIP: 0xffffffffb8e46066 unhandled rdmsr: 0x34
14:09:26 | > [ +0.008010] kvm [9610]: vcpu0, guest rIP: 0xffffffffb8e46066 unhandled rdmsr: 0x606
14:09:26 | > [Mar22 13:42] kvm [9956]: vcpu0, guest rIP: 0xffffffffb6046066 unhandled rdmsr: 0x140
14:09:26 | > [ +0.008712] kvm [9956]: vcpu0, guest rIP: 0xffffffffb6046066 unhandled rdmsr: 0x4e
14:09:26 | > [ +0.121907] kvm [9956]: vcpu1, guest rIP: 0xffffffffb6046066 unhandled rdmsr: 0x140
14:09:26 | > [ +0.008446] kvm [9956]: vcpu1, guest rIP: 0xffffffffb6046066 unhandled rdmsr: 0x4e
14:09:26 | > [Mar22 13:43] kvm [9956]: vcpu1, guest rIP: 0xffffffffb6046066 unhandled rdmsr: 0x34
14:09:26 | > [ +0.008857] kvm [9956]: vcpu1, guest rIP: 0xffffffffb6046066 unhandled rdmsr: 0x606
14:09:26 | > [Mar22 13:46] kvm [10234]: vcpu0, guest rIP: 0xffffffffb9e46066 unhandled rdmsr: 0x140
14:09:26 | > [ +0.008656] kvm [10234]: vcpu0, guest rIP: 0xffffffffb9e46066 unhandled rdmsr: 0x4e
14:09:26 | > [ +0.126628] kvm [10234]: vcpu1, guest rIP: 0xffffffffb9e46066 unhandled rdmsr: 0x140
14:09:26 | > [ +0.008718] kvm [10234]: vcpu1, guest rIP: 0xffffffffb9e46066 unhandled rdmsr: 0x4e
14:09:26 | > [Mar22 13:47] kvm [10234]: vcpu1, guest rIP: 0xffffffffb9e46066 unhandled rdmsr: 0x34
14:09:26 | > [ +0.008529] kvm [10234]: vcpu1, guest rIP: 0xffffffffb9e46066 unhandled rdmsr: 0x606
14:09:26 | > [Mar22 13:49] kvm [10564]: vcpu0, guest rIP: 0xffffffffa6e46066 unhandled rdmsr: 0x140
14:09:26 | > [ +0.008440] kvm [10564]: vcpu0, guest rIP: 0xffffffffa6e46066 unhandled rdmsr: 0x4e
14:09:26 | > [ +0.120317] kvm [10564]: vcpu1, guest rIP: 0xffffffffa6e46066 unhandled rdmsr: 0x140
14:09:26 | > [ +0.008467] kvm [10564]: vcpu1, guest rIP: 0xffffffffa6e46066 unhandled rdmsr: 0x4e
14:09:26 | > [ +45.657788] kvm [10564]: vcpu1, guest rIP: 0xffffffffa6e46066 unhandled rdmsr: 0x34
14:09:26 | > [ +0.008293] kvm [10564]: vcpu1, guest rIP: 0xffffffffa6e46066 unhandled rdmsr: 0x606
14:09:26 | > [Mar22 13:53] kvm [10871]: vcpu0, guest rIP: 0xffffffff94446066 unhandled rdmsr: 0x140
14:09:26 | > [ +0.009030] kvm [10871]: vcpu0, guest rIP: 0xffffffff94446066 unhandled rdmsr: 0x4e
14:09:26 | > [ +0.120618] kvm [10871]: vcpu1, guest rIP: 0xffffffff94446066 unhandled rdmsr: 0x140
14:09:26 | > [ +0.008524] kvm [10871]: vcpu1, guest rIP: 0xffffffff94446066 unhandled rdmsr: 0x4e
14:09:26 | > [ +38.659017] kvm [10871]: vcpu0, guest rIP: 0xffffffff94446066 unhandled rdmsr: 0x34
14:09:26 | > [ +0.008154] kvm [10871]: vcpu0, guest rIP: 0xffffffff94446066 unhandled rdmsr: 0x606
14:09:26 | > [Mar22 13:55] kvm [11210]: vcpu0, guest rIP: 0xffffffff8aa46066 unhandled rdmsr: 0x140
14:09:26 | > [ +0.008511] kvm [11210]: vcpu0, guest rIP: 0xffffffff8aa46066 unhandled rdmsr: 0x4e
14:09:26 | > [ +0.119601] kvm [11210]: vcpu1, guest rIP: 0xffffffff8aa46066 unhandled rdmsr: 0x140
14:09:26 | > [ +0.008344] kvm [11210]: vcpu1, guest rIP: 0xffffffff8aa46066 unhandled rdmsr: 0x4e
14:09:26 | > [Mar22 13:56] kvm [11210]: vcpu1, guest rIP: 0xffffffff8aa46066 unhandled rdmsr: 0x34
14:09:26 | > [ +0.008512] kvm [11210]: vcpu1, guest rIP: 0xffffffff8aa46066 unhandled rdmsr: 0x606
14:09:26 | > [Mar22 14:01] kvm [11534]: vcpu0, guest rIP: 0xffffffffa0846066 unhandled rdmsr: 0x140
14:09:26 | > [ +0.008480] kvm [11534]: vcpu0, guest rIP: 0xffffffffa0846066 unhandled rdmsr: 0x4e
14:09:26 | > [ +0.119860] kvm [11534]: vcpu1, guest rIP: 0xffffffffa0846066 unhandled rdmsr: 0x140
14:09:26 | > [ +0.008696] kvm [11534]: vcpu1, guest rIP: 0xffffffffa0846066 unhandled rdmsr: 0x4e
14:09:26 | > [Mar22 14:02] kvm [11534]: vcpu1, guest rIP: 0xffffffffa0846066 unhandled rdmsr: 0x34
14:09:26 | > [ +0.008466] kvm [11534]: vcpu1, guest rIP: 0xffffffffa0846066 unhandled rdmsr: 0x606
14:09:26 | > [Mar22 14:03] kvm [11856]: vcpu0, guest rIP: 0xffffffffab646066 unhandled rdmsr: 0x140
14:09:26 | > [ +0.008488] kvm [11856]: vcpu0, guest rIP: 0xffffffffab646066 unhandled rdmsr: 0x4e
14:09:26 | > [ +0.118678] kvm [11856]: vcpu1, guest rIP: 0xffffffffab646066 unhandled rdmsr: 0x140
14:09:26 | > [ +0.008495] kvm [11856]: vcpu1, guest rIP: 0xffffffffab646066 unhandled rdmsr: 0x4e
14:09:26 | > [Mar22 14:04] kvm [11856]: vcpu1, guest rIP: 0xffffffffab646066 unhandled rdmsr: 0x34
14:09:26 | > [ +0.008162] kvm [11856]: vcpu1, guest rIP: 0xffffffffab646066 unhandled rdmsr: 0x606
14:09:26 | > ==> kernel <==
14:09:26 | ! I0322 14:09:26.254770 17537 exec_runner.go:50] Run with output: uptime && uname -a
14:09:26 | > 14:09:26 up 9:51, 0 users, load average: 1.05, 1.31, 1.44
14:09:26 | > Linux kvm-integration-slave 4.9.0-8-amd64 #1 SMP Debian 4.9.144-3.1 (2019-02-19) x86_64 GNU/Linux
14:09:26 | > ==> kube-apiserver <==
14:09:26 | ! I0322 14:09:26.260365 17537 exec_runner.go:50] Run with output: docker logs --tail 50 6e967224b5b5
14:09:26 | > I0322 14:08:00.328418 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
14:09:26 | > I0322 14:08:00.368738 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
14:09:26 | > I0322 14:08:00.408580 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
14:09:26 | > I0322 14:08:00.448801 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
14:09:26 | > I0322 14:08:00.488685 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
14:09:26 | > I0322 14:08:00.528868 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
14:09:26 | > I0322 14:08:00.568729 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
14:09:26 | > I0322 14:08:00.608954 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
14:09:26 | > I0322 14:08:00.648784 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
14:09:26 | > I0322 14:08:00.688711 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
14:09:26 | > I0322 14:08:00.729314 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
14:09:26 | > I0322 14:08:00.768990 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
14:09:26 | > I0322 14:08:00.808800 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
14:09:26 | > I0322 14:08:00.848993 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
14:09:26 | > I0322 14:08:00.888683 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
14:09:26 | > I0322 14:08:00.928625 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
14:09:26 | > I0322 14:08:00.969023 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
14:09:26 | > I0322 14:08:01.008714 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
14:09:26 | > I0322 14:08:01.048865 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
14:09:26 | > I0322 14:08:01.088807 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
14:09:26 | > I0322 14:08:01.129198 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
14:09:26 | > I0322 14:08:01.169044 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
14:09:26 | > I0322 14:08:01.208629 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
14:09:26 | > I0322 14:08:01.248509 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
14:09:26 | > I0322 14:08:01.289038 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
14:09:26 | > I0322 14:08:01.335119 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
14:09:26 | > I0322 14:08:01.369117 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
14:09:26 | > I0322 14:08:01.408766 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
14:09:26 | > I0322 14:08:01.448543 1 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
14:09:26 | > I0322 14:08:01.487014 1 controller.go:608] quota admission added evaluator for: roles.rbac.authorization.k8s.io
14:09:26 | > I0322 14:08:01.489226 1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
14:09:26 | > I0322 14:08:01.528760 1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
14:09:26 | > I0322 14:08:01.568682 1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
14:09:26 | > I0322 14:08:01.608804 1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
14:09:26 | > I0322 14:08:01.653737 1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
14:09:26 | > I0322 14:08:01.689095 1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
14:09:26 | > I0322 14:08:01.730516 1 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
14:09:26 | > I0322 14:08:01.766921 1 controller.go:608] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
14:09:26 | > I0322 14:08:01.769235 1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
14:09:26 | > I0322 14:08:01.808968 1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
14:09:26 | > I0322 14:08:01.848973 1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
14:09:26 | > I0322 14:08:01.889065 1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
14:09:26 | > I0322 14:08:01.928938 1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
14:09:26 | > I0322 14:08:01.969037 1 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
14:09:26 | > I0322 14:08:02.585764 1 controller.go:608] quota admission added evaluator for: serviceaccounts
14:09:26 | > I0322 14:08:03.097692 1 controller.go:608] quota admission added evaluator for: deployments.apps
14:09:26 | > I0322 14:08:03.129260 1 controller.go:608] quota admission added evaluator for: daemonsets.apps
14:09:26 | > I0322 14:08:06.151360 1 controller.go:608] quota admission added evaluator for: namespaces
14:09:26 | > I0322 14:08:08.973394 1 controller.go:608] quota admission added evaluator for: controllerrevisions.apps
14:09:26 | > I0322 14:08:09.026733 1 controller.go:608] quota admission added evaluator for: replicasets.apps
14:09:26 | > ==> kube-proxy <==
14:09:26 | ! I0322 14:09:26.335234 17537 exec_runner.go:50] Run with output: docker logs --tail 50 2d8a98d59f6a
14:09:26 | ! I0322 14:09:26.434141 17537 exec_runner.go:50] Run with output: docker logs --tail 50 e5a9056aa15b
14:09:26 | > W0322 14:08:10.101250 1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
14:09:26 | > I0322 14:08:10.127071 1 server_others.go:148] Using iptables Proxier.
14:09:26 | > W0322 14:08:10.127277 1 proxier.go:319] clusterCIDR not specified, unable to distinguish between internal and external traffic
14:09:26 | > I0322 14:08:10.127605 1 server_others.go:178] Tearing down inactive rules.
14:09:26 | > E0322 14:08:10.363879 1 proxier.go:579] Error removing iptables rules in ipvs proxier: error deleting chain "KUBE-MARK-MASQ": exit status 1: iptables: Too many links.
14:09:26 | > I0322 14:08:10.717371 1 server.go:483] Version: v1.13.4
14:09:26 | > I0322 14:08:10.724391 1 conntrack.go:52] Setting nf_conntrack_max to 131072
14:09:26 | > I0322 14:08:10.724740 1 config.go:202] Starting service config controller
14:09:26 | > I0322 14:08:10.724806 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
14:09:26 | > I0322 14:08:10.724781 1 config.go:102] Starting endpoints config controller
14:09:26 | > I0322 14:08:10.725083 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
14:09:26 | > I0322 14:08:10.825102 1 controller_utils.go:1034] Caches are synced for service config controller
14:09:26 | > I0322 14:08:10.825433 1 controller_utils.go:1034] Caches are synced for endpoints config controller
14:09:26 | > ==> kube-scheduler <==
14:09:26 | > E0322 14:07:55.300822 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:55.301887 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:55.305477 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:55.307482 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:55.311412 1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:56.285781 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
14:09:26 | > E0322 14:07:56.297663 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:56.298547 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
14:09:26 | > E0322 14:07:56.299553 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
14:09:26 | > E0322 14:07:56.300892 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
14:09:26 | > E0322 14:07:56.302136 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:56.302984 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:56.306477 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:56.308457 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:56.312709 1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:57.287090 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
14:09:26 | > E0322 14:07:57.298872 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:57.299808 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
14:09:26 | > E0322 14:07:57.300914 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
14:09:26 | > E0322 14:07:57.301717 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
14:09:26 | > E0322 14:07:57.303102 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:57.304284 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:57.307422 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:57.309597 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:57.314503 1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:58.288547 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
14:09:26 | > E0322 14:07:58.300114 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:58.300957 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
14:09:26 | > E0322 14:07:58.302040 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
14:09:26 | > E0322 14:07:58.303135 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
14:09:26 | > E0322 14:07:58.304252 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:58.305338 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:58.308276 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:58.310486 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:58.315580 1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:59.290039 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
14:09:26 | > E0322 14:07:59.301477 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:59.302264 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
14:09:26 | > E0322 14:07:59.303264 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
14:09:26 | > E0322 14:07:59.304437 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
14:09:26 | > E0322 14:07:59.305451 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:59.306458 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:59.309173 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:59.311384 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
14:09:26 | > E0322 14:07:59.316548 1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
14:09:26 | > E0322 14:08:00.291612 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
14:09:26 | > I0322 14:08:02.067951 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
14:09:26 | > I0322 14:08:02.168193 1 controller_utils.go:1034] Caches are synced for scheduler controller
14:09:26 | > I0322 14:08:02.168278 1 leaderelection.go:205] attempting to acquire leader lease kube-system/kube-scheduler...
14:09:26 | > I0322 14:08:02.175097 1 leaderelection.go:214] successfully acquired lease kube-system/kube-scheduler
14:09:26 | > ==> kubelet <==
14:09:26 | ! I0322 14:09:26.516826 17537 exec_runner.go:50] Run with output: journalctl -u kubelet -n 50
14:09:26 | > -- Logs begin at Fri 2019-03-22 11:19:21 UTC, end at Fri 2019-03-22 14:09:26 UTC. --
14:09:26 | > Mar 22 14:07:51 kvm-integration-slave kubelet[14794]: E0322 14:07:49.923159 14794 eviction_manager.go:243] eviction manager: failed to get summary stats: failed to get node info: node "minikube" not found
14:09:26 | > Mar 22 14:07:51 kvm-integration-slave kubelet[14794]: E0322 14:07:49.969098 14794 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.158e4d11b0fb69ea", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf1d58eaf3c37bea, ext:251569503, loc:(*time.Location)(0x71d6440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf1d58eb040fa001, ext:451253102, loc:(*time.Location)(0x71d6440)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
14:09:26 | > Mar 22 14:07:51 kvm-integration-slave kubelet[14794]: E0322 14:07:49.981557 14794 kubelet.go:2266] node "minikube" not found
14:09:26 | > Mar 22 14:07:51 kvm-integration-slave kubelet[14794]: E0322 14:07:50.081728 14794 kubelet.go:2266] node "minikube" not found
14:09:26 | > Mar 22 14:07:51 kvm-integration-slave kubelet[14794]: E0322 14:07:50.181939 14794 kubelet.go:2266] node "minikube" not found
14:09:26 | > Mar 22 14:07:51 kvm-integration-slave kubelet[14794]: E0322 14:07:50.282181 14794 kubelet.go:2266] node "minikube" not found
14:09:26 | > Mar 22 14:07:51 kvm-integration-slave kubelet[14794]: E0322 14:07:50.369472 14794 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.158e4d11b0fb8dd9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf1d58eaf3c39fd9, ext:251578693, loc:(*time.Location)(0x71d6440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf1d58eb040fd713, ext:451267205, loc:(*time.Location)(0x71d6440)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
14:09:26 | > Mar 22 14:07:51 kvm-integration-slave kubelet[14794]: E0322 14:07:50.382399 14794 kubelet.go:2266] node "minikube" not found
14:09:26 | > Mar 22 14:07:51 kvm-integration-slave kubelet[14794]: E0322 14:07:50.482574 14794 kubelet.go:2266] node "minikube" not found
14:09:26 | > Mar 22 14:07:51 kvm-integration-slave kubelet[14794]: E0322 14:07:50.582769 14794 kubelet.go:2266] node "minikube" not found
14:09:26 | > Mar 22 14:07:51 kvm-integration-slave kubelet[14794]: E0322 14:07:50.682992 14794 kubelet.go:2266] node "minikube" not found
14:09:26 | > Mar 22 14:07:52 kvm-integration-slave kubelet[14794]: E0322 14:07:50.769959 14794 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.158e4d11b0fba2c0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf1d58eaf3c3b4c0, ext:251584043, loc:(*time.Location)(0x71d6440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf1d58eb041010f7, ext:451282022, loc:(*time.Location)(0x71d6440)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
14:09:26 | > Mar 22 14:07:52 kvm-integration-slave kubelet[14794]: E0322 14:07:50.783186 14794 kubelet.go:2266] node "minikube" not found
14:09:26 | > Mar 22 14:07:52 kvm-integration-slave kubelet[14794]: E0322 14:07:50.883515 14794 kubelet.go:2266] node "minikube" not found
14:09:26 | > Mar 22 14:07:52 kvm-integration-slave kubelet[14794]: E0322 14:07:50.983726 14794 kubelet.go:2266] node "minikube" not found
14:09:26 | > Mar 22 14:07:52 kvm-integration-slave kubelet[14794]: E0322 14:07:51.083927 14794 kubelet.go:2266] node "minikube" not found
14:09:26 | > Mar 22 14:07:52 kvm-integration-slave kubelet[14794]: E0322 14:07:51.169396 14794 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.158e4d11b0fb8dd9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf1d58eaf3c39fd9, ext:251578693, loc:(*time.Location)(0x71d6440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf1d58eb0416fa1b, ext:451734919, loc:(*time.Location)(0x71d6440)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
14:09:26 | > Mar 22 14:07:52 kvm-integration-slave kubelet[14794]: E0322 14:07:51.184105 14794 kubelet.go:2266] node "minikube" not found
14:09:26 | > Mar 22 14:07:52 kvm-integration-slave kubelet[14794]: E0322 14:07:51.284299 14794 kubelet.go:2266] node "minikube" not found
14:09:26 | > Mar 22 14:07:52 kvm-integration-slave kubelet[14794]: E0322 14:07:51.384526 14794 kubelet.go:2266] node "minikube" not found
14:09:26 | > Mar 22 14:07:52 kvm-integration-slave kubelet[14794]: E0322 14:07:51.484773 14794 kubelet.go:2266] node "minikube" not found
14:09:26 | > Mar 22 14:07:52 kvm-integration-slave kubelet[14794]: E0322 14:07:51.569291 14794 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.158e4d11b0fb69ea", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"},
FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf1d58eaf3c37bea, ext:251569503, loc:(*time.Location)(0x71d6440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf1d58eb0416cd28, ext:451723428, loc:(*time.Location)(0x71d6440)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 14:09:26 | > Mar 22 14:07:52 kvm-integration-slave kubelet[14794]: E0322 14:07:51.584985 14794 kubelet.go:2266] node "minikube" not found 14:09:26 | > Mar 22 14:07:52 kvm-integration-slave kubelet[14794]: E0322 14:07:51.685218 14794 kubelet.go:2266] node "minikube" not found 14:09:26 | > Mar 22 14:07:52 kvm-integration-slave kubelet[14794]: E0322 14:07:51.785539 14794 kubelet.go:2266] node "minikube" not found 14:09:26 | > Mar 22 14:07:52 kvm-integration-slave kubelet[14794]: E0322 14:07:51.885724 14794 kubelet.go:2266] node "minikube" not found 14:09:26 | > Mar 22 14:07:53 kvm-integration-slave kubelet[14794]: E0322 14:07:51.969502 14794 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.158e4d11b0fba2c0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", 
Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf1d58eaf3c3b4c0, ext:251584043, loc:(*time.Location)(0x71d6440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf1d58eb04170f02, ext:451740273, loc:(*time.Location)(0x71d6440)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 14:09:26 | > Mar 22 14:07:53 kvm-integration-slave kubelet[14794]: E0322 14:07:51.985918 14794 kubelet.go:2266] node "minikube" not found 14:09:26 | > Mar 22 14:07:53 kvm-integration-slave kubelet[14794]: E0322 14:07:52.086125 14794 kubelet.go:2266] node "minikube" not found 14:09:26 | > Mar 22 14:07:53 kvm-integration-slave kubelet[14794]: E0322 14:07:52.186317 14794 kubelet.go:2266] node "minikube" not found 14:09:26 | > Mar 22 14:07:53 kvm-integration-slave kubelet[14794]: E0322 14:07:52.286504 14794 kubelet.go:2266] node "minikube" not found 14:09:26 | > Mar 22 14:07:53 kvm-integration-slave kubelet[14794]: I0322 14:07:52.304117 14794 kubelet_node_status.go:75] Successfully registered node minikube 14:09:26 | > Mar 22 14:08:09 kvm-integration-slave kubelet[14794]: I0322 14:08:09.028634 14794 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/ebbfd811-4cab-11e9-aacb-42010a800003-kube-proxy") pod "kube-proxy-4ncwv" (UID: "ebbfd811-4cab-11e9-aacb-42010a800003") 14:09:26 | > Mar 22 14:08:09 kvm-integration-slave kubelet[14794]: I0322 14:08:09.028732 14794 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/ebbfd811-4cab-11e9-aacb-42010a800003-lib-modules") pod "kube-proxy-4ncwv" (UID: "ebbfd811-4cab-11e9-aacb-42010a800003") 14:09:26 | 
> Mar 22 14:08:09 kvm-integration-slave kubelet[14794]: I0322 14:08:09.028759 14794 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/ebbfd811-4cab-11e9-aacb-42010a800003-xtables-lock") pod "kube-proxy-4ncwv" (UID: "ebbfd811-4cab-11e9-aacb-42010a800003") 14:09:26 | > Mar 22 14:08:09 kvm-integration-slave kubelet[14794]: I0322 14:08:09.028784 14794 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-cxmjr" (UniqueName: "kubernetes.io/secret/ebbfd811-4cab-11e9-aacb-42010a800003-kube-proxy-token-cxmjr") pod "kube-proxy-4ncwv" (UID: "ebbfd811-4cab-11e9-aacb-42010a800003") 14:09:26 | > Mar 22 14:08:09 kvm-integration-slave kubelet[14794]: I0322 14:08:09.129583 14794 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-v5ktg" (UniqueName: "kubernetes.io/secret/ebc7b0a7-4cab-11e9-aacb-42010a800003-coredns-token-v5ktg") pod "coredns-86c58d9df4-h25vx" (UID: "ebc7b0a7-4cab-11e9-aacb-42010a800003") 14:09:26 | > Mar 22 14:08:09 kvm-integration-slave kubelet[14794]: I0322 14:08:09.129776 14794 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ebc8b4f4-4cab-11e9-aacb-42010a800003-config-volume") pod "coredns-86c58d9df4-88rxq" (UID: "ebc8b4f4-4cab-11e9-aacb-42010a800003") 14:09:26 | > Mar 22 14:08:09 kvm-integration-slave kubelet[14794]: I0322 14:08:09.129910 14794 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-v5ktg" (UniqueName: "kubernetes.io/secret/ebc8b4f4-4cab-11e9-aacb-42010a800003-coredns-token-v5ktg") pod "coredns-86c58d9df4-88rxq" (UID: "ebc8b4f4-4cab-11e9-aacb-42010a800003") 14:09:26 | > Mar 22 14:08:09 kvm-integration-slave kubelet[14794]: I0322 14:08:09.129990 14794 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume 
started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ebc7b0a7-4cab-11e9-aacb-42010a800003-config-volume") pod "coredns-86c58d9df4-h25vx" (UID: "ebc7b0a7-4cab-11e9-aacb-42010a800003") 14:09:26 | > Mar 22 14:08:10 kvm-integration-slave kubelet[14794]: W0322 14:08:10.329814 14794 pod_container_deletor.go:75] Container "24e6036d2ba2ed041afd588a25786c722a2375fef645455c6ce90d90013e6704" not found in pod's containers 14:09:26 | > Mar 22 14:08:10 kvm-integration-slave kubelet[14794]: I0322 14:08:10.835402 14794 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/eccf1283-4cab-11e9-aacb-42010a800003-tmp") pod "storage-provisioner" (UID: "eccf1283-4cab-11e9-aacb-42010a800003") 14:09:26 | > Mar 22 14:08:10 kvm-integration-slave kubelet[14794]: I0322 14:08:10.835490 14794 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-s7brt" (UniqueName: "kubernetes.io/secret/eccf1283-4cab-11e9-aacb-42010a800003-storage-provisioner-token-s7brt") pod "storage-provisioner" (UID: "eccf1283-4cab-11e9-aacb-42010a800003") 14:09:26 | > Mar 22 14:08:11 kvm-integration-slave kubelet[14794]: E0322 14:08:11.349466 14794 remote_runtime.go:282] ContainerStatus "96f7c7e4e6e801db29717797d0964a9beb99ede613ccbc6f2e55afbfd5a42269" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 96f7c7e4e6e801db29717797d0964a9beb99ede613ccbc6f2e55afbfd5a42269 14:09:26 | > Mar 22 14:08:11 kvm-integration-slave kubelet[14794]: E0322 14:08:11.349517 14794 kuberuntime_container.go:397] ContainerStatus for 96f7c7e4e6e801db29717797d0964a9beb99ede613ccbc6f2e55afbfd5a42269 error: rpc error: code = Unknown desc = Error: No such container: 96f7c7e4e6e801db29717797d0964a9beb99ede613ccbc6f2e55afbfd5a42269 14:09:26 | > Mar 22 14:08:11 kvm-integration-slave kubelet[14794]: E0322 14:08:11.349528 14794 kuberuntime_manager.go:871] 
getPodContainerStatuses for pod "storage-provisioner_kube-system(eccf1283-4cab-11e9-aacb-42010a800003)" failed: rpc error: code = Unknown desc = Error: No such container: 96f7c7e4e6e801db29717797d0964a9beb99ede613ccbc6f2e55afbfd5a42269 14:09:26 | > Mar 22 14:08:11 kvm-integration-slave kubelet[14794]: E0322 14:08:11.349552 14794 generic.go:247] PLEG: Ignoring events for pod storage-provisioner/kube-system: rpc error: code = Unknown desc = Error: No such container: 96f7c7e4e6e801db29717797d0964a9beb99ede613ccbc6f2e55afbfd5a42269 14:09:26 | > Mar 22 14:09:21 kvm-integration-slave kubelet[14794]: I0322 14:09:21.067782 14794 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-kxp65" (UniqueName: "kubernetes.io/secret/16adef8f-4cac-11e9-aacb-42010a800003-default-token-kxp65") pod "nginx-svc" (UID: "16adef8f-4cac-11e9-aacb-42010a800003") 14:09:26 | > Mar 22 14:09:25 kvm-integration-slave kubelet[14794]: I0322 14:09:25.688557 14794 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-kxp65" (UniqueName: "kubernetes.io/secret/196ab782-4cac-11e9-aacb-42010a800003-default-token-kxp65") pod "busybox-kl2d6" (UID: "196ab782-4cac-11e9-aacb-42010a800003") 14:09:26 | > Mar 22 14:09:26 kvm-integration-slave kubelet[14794]: W0322 14:09:26.400542 14794 pod_container_deletor.go:75] Container "bb6cf2454fcfa421aee7936cff0ab926250c99e50d4390cb0f08515673ef55ad" not found in pod's containers === CONT TestFunctional/ServicesList 14:09:26 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 service list] 14:09:26 | > |-------------|------------|-------------------------| 14:09:26 | > | NAMESPACE | NAME | URL | 14:09:26 | > |-------------|------------|-------------------------| 14:09:26 | > | default | kubernetes | No node port | 14:09:26 | > | default | nginx-svc | http://10.128.0.3:31069 | 14:09:26 | > | kube-system | kube-dns | No node port | 14:09:26 | > 
|-------------|------------|-------------------------| 14:09:30 | ! I0322 14:09:30.974112 17163 tunnel_manager.go:81] waiting for tunnel to be ready for next check 14:09:30 | ! I0322 14:09:30.974144 17163 tunnel_manager.go:100] check received 14:09:30 | ! I0322 14:09:30.974158 17163 tunnel.go:119] updating tunnel status... 14:09:30 | ! I0322 14:09:30.974514 17163 none.go:231] checking for running kubelet ... 14:09:30 | ! I0322 14:09:30.974530 17163 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet 14:09:30 | ! I0322 14:09:30.980921 17163 tunnel.go:124] minikube is running, trying to add route10.96.0.0/12 -> 10.128.0.3 14:09:30 | ! I0322 14:09:30.982166 17163 route_linux.go:98] skipping line: can't parse CIDR from routing table: 10.128.0.1 14:09:30 | ! I0322 14:09:30.982405 17163 round_trippers.go:383] GET https://10.128.0.3:8443/api/v1/services?timeout=1s 14:09:30 | ! I0322 14:09:30.982418 17163 round_trippers.go:390] Request Headers: 14:09:30 | ! I0322 14:09:30.982424 17163 round_trippers.go:393] Accept: application/json, */* 14:09:30 | ! I0322 14:09:30.982430 17163 round_trippers.go:393] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format 14:09:30 | ! I0322 14:09:30.984991 17163 round_trippers.go:408] Response Status: 200 OK in 2 milliseconds 14:09:30 | ! I0322 14:09:30.985008 17163 round_trippers.go:411] Response Headers: 14:09:30 | ! I0322 14:09:30.985014 17163 round_trippers.go:414] Content-Type: application/json 14:09:30 | ! I0322 14:09:30.985019 17163 round_trippers.go:414] Content-Length: 2208 14:09:30 | ! I0322 14:09:30.985024 17163 round_trippers.go:414] Date: Fri, 22 Mar 2019 14:09:30 GMT 14:09:30 | ! 
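The `service list` table above pairs each service with a node-port URL, or a "No node port" placeholder for services that don't expose one. A minimal sketch of that URL column, assuming (from the table itself, not from minikube's source) that the URL is simply `http://<node-ip>:<nodePort>`; `serviceURL` is a hypothetical helper:

```go
package main

import "fmt"

// serviceURL renders the URL column of `minikube service list`:
// services with a node port get http://<nodeIP>:<port>; the rest
// get the "No node port" placeholder seen in the table.
func serviceURL(nodeIP string, nodePort int) string {
	if nodePort == 0 {
		return "No node port"
	}
	return fmt.Sprintf("http://%s:%d", nodeIP, nodePort)
}

func main() {
	fmt.Println(serviceURL("10.128.0.3", 31069)) // http://10.128.0.3:31069
	fmt.Println(serviceURL("10.128.0.3", 0))     // No node port
}
```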
I0322 14:09:30.985077 17163 request.go:897] Response Body: {"kind":"ServiceList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/services","resourceVersion":"521"},"items":[{"metadata":{"name":"kubernetes","namespace":"default","selfLink":"/api/v1/namespaces/default/services/kubernetes","uid":"e64748b6-4cab-11e9-aacb-42010a800003","resourceVersion":"42","creationTimestamp":"2019-03-22T14:07:59Z","labels":{"component":"apiserver","provider":"kubernetes"}},"spec":{"ports":[{"name":"https","protocol":"TCP","port":443,"targetPort":8443}],"clusterIP":"10.96.0.1","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"nginx-svc","namespace":"default","selfLink":"/api/v1/namespaces/default/services/nginx-svc","uid":"16b0d6f1-4cac-11e9-aacb-42010a800003","resourceVersion":"495","creationTimestamp":"2019-03-22T14:09:21Z","labels":{"run":"nginx-svc"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"run\":\"nginx-svc\"},\"name\": [truncated 1184 chars] 14:09:30 | ! I0322 14:09:30.985320 17163 loadbalancer_patcher.go:71] kubernetes is not type LoadBalancer, skipping. 14:09:30 | ! I0322 14:09:30.985335 17163 loadbalancer_patcher.go:74] nginx-svc is type LoadBalancer. 14:09:30 | ! I0322 14:09:30.985341 17163 loadbalancer_patcher.go:71] kube-dns is not type LoadBalancer, skipping. 14:09:30 | ! I0322 14:09:30.985347 17163 tunnel.go:130] sending report id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Running, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s()) 14:09:30 | ! I0322 14:09:30.985392 17163 tunnel_manager.go:108] minikube status: id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Running, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s()) 14:09:30 | ! 
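The `loadbalancer_patcher` lines above show the tunnel walking the `ServiceList` response and skipping every service that is not `type: LoadBalancer`. A minimal sketch of that filtering step, using a trimmed stand-in struct rather than the real `k8s.io/api/core/v1` types; `loadBalancerServices` is a hypothetical helper:

```go
package main

import "fmt"

// Service is a trimmed stand-in for the fields of v1.Service that
// matter here (hypothetical, for illustration only).
type Service struct {
	Name string
	Type string // "ClusterIP", "NodePort", "LoadBalancer", ...
}

// loadBalancerServices keeps only the services a tunnel would patch,
// mirroring the "is not type LoadBalancer, skipping" log lines.
func loadBalancerServices(svcs []Service) []string {
	var names []string
	for _, s := range svcs {
		if s.Type != "LoadBalancer" {
			continue // kubernetes and kube-dns are skipped above
		}
		names = append(names, s.Name)
	}
	return names
}

func main() {
	svcs := []Service{
		{Name: "kubernetes", Type: "ClusterIP"},
		{Name: "nginx-svc", Type: "LoadBalancer"},
		{Name: "kube-dns", Type: "ClusterIP"},
	}
	fmt.Println(loadBalancerServices(svcs)) // [nginx-svc]
}
```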
I0322 14:09:30.985413 17163 tunnel_manager.go:83] sleep for 5s 14:09:30 | > Status: 14:09:30 | > machine: minikube 14:09:30 | > pid: 17163 14:09:30 | > route: 10.96.0.0/12 -> 10.128.0.3 14:09:30 | > minikube: Running 14:09:30 | > services: [nginx-svc] 14:09:30 | > errors: 14:09:30 | > minikube: no errors 14:09:30 | > router: no errors 14:09:30 | > loadbalancer emulator: no errors 14:09:35 | ! I0322 14:09:35.985666 17163 tunnel_manager.go:100] check received 14:09:35 | ! I0322 14:09:35.985705 17163 tunnel.go:119] updating tunnel status... 14:09:35 | ! I0322 14:09:35.985671 17163 tunnel_manager.go:81] waiting for tunnel to be ready for next check 14:09:35 | ! I0322 14:09:35.986076 17163 none.go:231] checking for running kubelet ... 14:09:35 | ! I0322 14:09:35.986090 17163 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet 14:09:35 | ! I0322 14:09:35.992189 17163 tunnel.go:124] minikube is running, trying to add route10.96.0.0/12 -> 10.128.0.3 14:09:35 | ! I0322 14:09:35.993269 17163 route_linux.go:98] skipping line: can't parse CIDR from routing table: 10.128.0.1 14:09:35 | ! I0322 14:09:35.993481 17163 round_trippers.go:383] GET https://10.128.0.3:8443/api/v1/services?timeout=1s 14:09:35 | ! I0322 14:09:35.993496 17163 round_trippers.go:390] Request Headers: 14:09:35 | ! I0322 14:09:35.993502 17163 round_trippers.go:393] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format 14:09:35 | ! I0322 14:09:35.993508 17163 round_trippers.go:393] Accept: application/json, */* 14:09:35 | ! I0322 14:09:35.995992 17163 round_trippers.go:408] Response Status: 200 OK in 2 milliseconds 14:09:35 | ! I0322 14:09:35.996008 17163 round_trippers.go:411] Response Headers: 14:09:35 | ! I0322 14:09:35.996013 17163 round_trippers.go:414] Content-Type: application/json 14:09:35 | ! I0322 14:09:35.996018 17163 round_trippers.go:414] Content-Length: 2208 14:09:35 | ! 
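The repeated `route_linux.go:98` lines show the routing-table scan tolerating entries that are not CIDRs: the gateway address `10.128.0.1` carries no `/mask`, so it fails to parse and the line is skipped, while the tunnel's own `10.96.0.0/12` parses fine. A minimal sketch of that distinction using Go's standard `net.ParseCIDR` (an assumption for illustration; the real parser may differ):

```go
package main

import (
	"fmt"
	"net"
)

// tryParseCIDR returns the normalized network and true when s is a
// valid CIDR; routing-table entries without a /mask (like the
// gateway 10.128.0.1) fail and would be skipped.
func tryParseCIDR(s string) (string, bool) {
	_, ipnet, err := net.ParseCIDR(s)
	if err != nil {
		return "", false
	}
	return ipnet.String(), true
}

func main() {
	_, ok := tryParseCIDR("10.128.0.1")
	fmt.Println(ok) // false: bare IP, no mask, line is skipped
	netw, _ := tryParseCIDR("10.96.0.0/12")
	fmt.Println(netw) // 10.96.0.0/12
}
```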
I0322 14:09:35.996023 17163 round_trippers.go:414] Date: Fri, 22 Mar 2019 14:09:35 GMT 14:09:35 | ! I0322 14:09:35.996071 17163 request.go:897] Response Body: {"kind":"ServiceList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/services","resourceVersion":"526"},"items":[{"metadata":{"name":"kubernetes","namespace":"default","selfLink":"/api/v1/namespaces/default/services/kubernetes","uid":"e64748b6-4cab-11e9-aacb-42010a800003","resourceVersion":"42","creationTimestamp":"2019-03-22T14:07:59Z","labels":{"component":"apiserver","provider":"kubernetes"}},"spec":{"ports":[{"name":"https","protocol":"TCP","port":443,"targetPort":8443}],"clusterIP":"10.96.0.1","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"nginx-svc","namespace":"default","selfLink":"/api/v1/namespaces/default/services/nginx-svc","uid":"16b0d6f1-4cac-11e9-aacb-42010a800003","resourceVersion":"495","creationTimestamp":"2019-03-22T14:09:21Z","labels":{"run":"nginx-svc"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"run\":\"nginx-svc\"},\"name\": [truncated 1184 chars] 14:09:35 | > Status: 14:09:35 | > machine: minikube 14:09:35 | ! I0322 14:09:35.996278 17163 loadbalancer_patcher.go:71] kubernetes is not type LoadBalancer, skipping. 14:09:35 | ! I0322 14:09:35.996290 17163 loadbalancer_patcher.go:74] nginx-svc is type LoadBalancer. 14:09:35 | ! I0322 14:09:35.996296 17163 loadbalancer_patcher.go:71] kube-dns is not type LoadBalancer, skipping. 14:09:35 | ! I0322 14:09:35.996302 17163 tunnel.go:130] sending report id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Running, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s()) 14:09:35 | > pid: 17163 14:09:35 | ! 
I0322 14:09:35.996351 17163 tunnel_manager.go:108] minikube status: id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Running, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s()) 14:09:35 | > route: 10.96.0.0/12 -> 10.128.0.3 14:09:35 | ! I0322 14:09:35.996374 17163 tunnel_manager.go:83] sleep for 5s 14:09:35 | > minikube: Running 14:09:35 | > services: [nginx-svc] 14:09:35 | > errors: 14:09:35 | > minikube: no errors 14:09:35 | > router: no errors 14:09:35 | > loadbalancer emulator: no errors 14:09:40 | ! I0322 14:09:40.996513 17163 tunnel_manager.go:81] waiting for tunnel to be ready for next check 14:09:40 | ! I0322 14:09:40.996559 17163 tunnel_manager.go:100] check received 14:09:40 | ! I0322 14:09:40.996566 17163 tunnel.go:119] updating tunnel status... 14:09:40 | ! I0322 14:09:40.996908 17163 none.go:231] checking for running kubelet ... 14:09:40 | ! I0322 14:09:40.996929 17163 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet 14:09:41 | ! I0322 14:09:41.002833 17163 tunnel.go:124] minikube is running, trying to add route10.96.0.0/12 -> 10.128.0.3 14:09:41 | ! I0322 14:09:41.003974 17163 route_linux.go:98] skipping line: can't parse CIDR from routing table: 10.128.0.1 14:09:41 | ! I0322 14:09:41.004212 17163 round_trippers.go:383] GET https://10.128.0.3:8443/api/v1/services?timeout=1s 14:09:41 | ! I0322 14:09:41.004225 17163 round_trippers.go:390] Request Headers: 14:09:41 | ! I0322 14:09:41.004232 17163 round_trippers.go:393] Accept: application/json, */* 14:09:41 | ! I0322 14:09:41.004237 17163 round_trippers.go:393] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format 14:09:41 | ! I0322 14:09:41.006702 17163 round_trippers.go:408] Response Status: 200 OK in 2 milliseconds 14:09:41 | ! I0322 14:09:41.006720 17163 round_trippers.go:411] Response Headers: 14:09:41 | ! I0322 14:09:41.006726 17163 round_trippers.go:414] Content-Type: application/json 14:09:41 | ! 
I0322 14:09:41.006731 17163 round_trippers.go:414] Content-Length: 2208 14:09:41 | ! I0322 14:09:41.006736 17163 round_trippers.go:414] Date: Fri, 22 Mar 2019 14:09:41 GMT 14:09:41 | ! I0322 14:09:41.006800 17163 request.go:897] Response Body: {"kind":"ServiceList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/services","resourceVersion":"533"},"items":[{"metadata":{"name":"kubernetes","namespace":"default","selfLink":"/api/v1/namespaces/default/services/kubernetes","uid":"e64748b6-4cab-11e9-aacb-42010a800003","resourceVersion":"42","creationTimestamp":"2019-03-22T14:07:59Z","labels":{"component":"apiserver","provider":"kubernetes"}},"spec":{"ports":[{"name":"https","protocol":"TCP","port":443,"targetPort":8443}],"clusterIP":"10.96.0.1","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"nginx-svc","namespace":"default","selfLink":"/api/v1/namespaces/default/services/nginx-svc","uid":"16b0d6f1-4cac-11e9-aacb-42010a800003","resourceVersion":"495","creationTimestamp":"2019-03-22T14:09:21Z","labels":{"run":"nginx-svc"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"run\":\"nginx-svc\"},\"name\": [truncated 1184 chars] 14:09:41 | ! I0322 14:09:41.007003 17163 loadbalancer_patcher.go:71] kubernetes is not type LoadBalancer, skipping. 14:09:41 | ! I0322 14:09:41.007019 17163 loadbalancer_patcher.go:74] nginx-svc is type LoadBalancer. 14:09:41 | ! I0322 14:09:41.007025 17163 loadbalancer_patcher.go:71] kube-dns is not type LoadBalancer, skipping. 14:09:41 | ! I0322 14:09:41.007032 17163 tunnel.go:130] sending report id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Running, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s()) 14:09:41 | ! 
I0322 14:09:41.007074 17163 tunnel_manager.go:108] minikube status: id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Running, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s()) 14:09:41 | ! I0322 14:09:41.007096 17163 tunnel_manager.go:83] sleep for 5s 14:09:41 | > Status: 14:09:41 | > machine: minikube 14:09:41 | > pid: 17163 14:09:41 | > route: 10.96.0.0/12 -> 10.128.0.3 14:09:41 | > minikube: Running 14:09:41 | > services: [nginx-svc] 14:09:41 | > errors: 14:09:41 | > minikube: no errors 14:09:41 | > router: no errors 14:09:41 | > loadbalancer emulator: no errors 14:09:46 | ! I0322 14:09:46.007241 17163 tunnel_manager.go:100] check received 14:09:46 | ! I0322 14:09:46.007282 17163 tunnel.go:119] updating tunnel status... 14:09:46 | ! I0322 14:09:46.007309 17163 tunnel_manager.go:81] waiting for tunnel to be ready for next check 14:09:46 | ! I0322 14:09:46.007665 17163 none.go:231] checking for running kubelet ... 14:09:46 | ! I0322 14:09:46.007685 17163 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet 14:09:46 | ! I0322 14:09:46.014334 17163 tunnel.go:124] minikube is running, trying to add route10.96.0.0/12 -> 10.128.0.3 14:09:46 | ! I0322 14:09:46.015577 17163 route_linux.go:98] skipping line: can't parse CIDR from routing table: 10.128.0.1 14:09:46 | ! I0322 14:09:46.015880 17163 round_trippers.go:383] GET https://10.128.0.3:8443/api/v1/services?timeout=1s 14:09:46 | ! I0322 14:09:46.015895 17163 round_trippers.go:390] Request Headers: 14:09:46 | ! I0322 14:09:46.015901 17163 round_trippers.go:393] Accept: application/json, */* 14:09:46 | ! I0322 14:09:46.015907 17163 round_trippers.go:393] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format 14:09:46 | > Status: 14:09:46 | ! I0322 14:09:46.018816 17163 round_trippers.go:408] Response Status: 200 OK in 2 milliseconds 14:09:46 | ! I0322 14:09:46.018831 17163 round_trippers.go:411] Response Headers: 14:09:46 | ! 
I0322 14:09:46.018837 17163 round_trippers.go:414] Content-Type: application/json 14:09:46 | ! I0322 14:09:46.018842 17163 round_trippers.go:414] Content-Length: 2208 14:09:46 | ! I0322 14:09:46.018847 17163 round_trippers.go:414] Date: Fri, 22 Mar 2019 14:09:46 GMT 14:09:46 | > machine: minikube 14:09:46 | > pid: 17163 14:09:46 | > route: 10.96.0.0/12 -> 10.128.0.3 14:09:46 | > minikube: Running 14:09:46 | > services: [nginx-svc] 14:09:46 | > errors: 14:09:46 | > minikube: no errors 14:09:46 | 14:09:46 | > router: no errors 14:09:46 | > loadbalancer emulator: no errors ! I0322 14:09:46.018894 17163 request.go:897] Response Body: {"kind":"ServiceList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/services","resourceVersion":"538"},"items":[{"metadata":{"name":"kubernetes","namespace":"default","selfLink":"/api/v1/namespaces/default/services/kubernetes","uid":"e64748b6-4cab-11e9-aacb-42010a800003","resourceVersion":"42","creationTimestamp":"2019-03-22T14:07:59Z","labels":{"component":"apiserver","provider":"kubernetes"}},"spec":{"ports":[{"name":"https","protocol":"TCP","port":443,"targetPort":8443}],"clusterIP":"10.96.0.1","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"nginx-svc","namespace":"default","selfLink":"/api/v1/namespaces/default/services/nginx-svc","uid":"16b0d6f1-4cac-11e9-aacb-42010a800003","resourceVersion":"495","creationTimestamp":"2019-03-22T14:09:21Z","labels":{"run":"nginx-svc"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"run\":\"nginx-svc\"},\"name\": [truncated 1184 chars] 14:09:46 | ! I0322 14:09:46.019052 17163 loadbalancer_patcher.go:71] kubernetes is not type LoadBalancer, skipping. 14:09:46 | ! I0322 14:09:46.019061 17163 loadbalancer_patcher.go:74] nginx-svc is type LoadBalancer. 14:09:46 | ! 
I0322 14:09:46.019066 17163 loadbalancer_patcher.go:71] kube-dns is not type LoadBalancer, skipping. 14:09:46 | ! I0322 14:09:46.019072 17163 tunnel.go:130] sending report id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Running, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s()) 14:09:46 | ! I0322 14:09:46.019112 17163 tunnel_manager.go:108] minikube status: id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Running, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s()) 14:09:46 | ! I0322 14:09:46.019164 17163 tunnel_manager.go:83] sleep for 5s 14:09:51 | ! I0322 14:09:51.019348 17163 tunnel_manager.go:81] waiting for tunnel to be ready for next check 14:09:51 | ! I0322 14:09:51.019405 17163 tunnel_manager.go:100] check received 14:09:51 | ! I0322 14:09:51.019415 17163 tunnel.go:119] updating tunnel status... 14:09:51 | ! I0322 14:09:51.019813 17163 none.go:231] checking for running kubelet ... 14:09:51 | ! I0322 14:09:51.019831 17163 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet 14:09:51 | ! I0322 14:09:51.026455 17163 tunnel.go:124] minikube is running, trying to add route10.96.0.0/12 -> 10.128.0.3 14:09:51 | ! I0322 14:09:51.027616 17163 route_linux.go:98] skipping line: can't parse CIDR from routing table: 10.128.0.1 14:09:51 | ! I0322 14:09:51.027844 17163 round_trippers.go:383] GET https://10.128.0.3:8443/api/v1/services?timeout=1s 14:09:51 | ! I0322 14:09:51.027863 17163 round_trippers.go:390] Request Headers: 14:09:51 | ! I0322 14:09:51.027867 17163 round_trippers.go:393] Accept: application/json, */* 14:09:51 | ! I0322 14:09:51.027871 17163 round_trippers.go:393] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format 14:09:51 | ! I0322 14:09:51.030968 17163 round_trippers.go:408] Response Status: 200 OK in 3 milliseconds 14:09:51 | ! I0322 14:09:51.030983 17163 round_trippers.go:411] Response Headers: 14:09:51 | ! 
I0322 14:09:51.030987 17163 round_trippers.go:414] Content-Type: application/json 14:09:51 | ! I0322 14:09:51.030990 17163 round_trippers.go:414] Content-Length: 2208 14:09:51 | ! I0322 14:09:51.030993 17163 round_trippers.go:414] Date: Fri, 22 Mar 2019 14:09:51 GMT 14:09:51 | ! I0322 14:09:51.031045 17163 request.go:897] Response Body: {"kind":"ServiceList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/services","resourceVersion":"545"},"items":[{"metadata":{"name":"kubernetes","namespace":"default","selfLink":"/api/v1/namespaces/default/services/kubernetes","uid":"e64748b6-4cab-11e9-aacb-42010a800003","resourceVersion":"42","creationTimestamp":"2019-03-22T14:07:59Z","labels":{"component":"apiserver","provider":"kubernetes"}},"spec":{"ports":[{"name":"https","protocol":"TCP","port":443,"targetPort":8443}],"clusterIP":"10.96.0.1","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"nginx-svc","namespace":"default","selfLink":"/api/v1/namespaces/default/services/nginx-svc","uid":"16b0d6f1-4cac-11e9-aacb-42010a800003","resourceVersion":"495","creationTimestamp":"2019-03-22T14:09:21Z","labels":{"run":"nginx-svc"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"run\":\"nginx-svc\"},\"name\": [truncated 1184 chars] 14:09:51 | ! I0322 14:09:51.031208 17163 loadbalancer_patcher.go:71] kubernetes is not type LoadBalancer, skipping. 14:09:51 | ! I0322 14:09:51.031221 17163 loadbalancer_patcher.go:74] nginx-svc is type LoadBalancer. 14:09:51 | ! I0322 14:09:51.031225 17163 loadbalancer_patcher.go:71] kube-dns is not type LoadBalancer, skipping. 14:09:51 | ! I0322 14:09:51.031230 17163 tunnel.go:130] sending report id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Running, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s()) 14:09:51 | ! 
I0322 14:09:51.031266 17163 tunnel_manager.go:108] minikube status: id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Running, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s()) 14:09:51 | ! I0322 14:09:51.031285 17163 tunnel_manager.go:83] sleep for 5s 14:09:51 | > Status: 14:09:51 | > machine: minikube 14:09:51 | > pid: 17163 14:09:51 | > route: 10.96.0.0/12 -> 10.128.0.3 14:09:51 | > minikube: Running 14:09:51 | > services: [nginx-svc] 14:09:51 | > errors: 14:09:51 | > minikube: no errors 14:09:51 | > router: no errors 14:09:51 | > loadbalancer emulator: no errors 14:09:56 | ! I0322 14:09:56.031424 17163 tunnel_manager.go:81] waiting for tunnel to be ready for next check 14:09:56 | ! I0322 14:09:56.031470 17163 tunnel_manager.go:100] check received 14:09:56 | ! I0322 14:09:56.031478 17163 tunnel.go:119] updating tunnel status... 14:09:56 | ! I0322 14:09:56.031768 17163 none.go:231] checking for running kubelet ... 14:09:56 | ! I0322 14:09:56.031789 17163 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet 14:09:56 | ! I0322 14:09:56.038027 17163 tunnel.go:124] minikube is running, trying to add route10.96.0.0/12 -> 10.128.0.3 14:09:56 | ! I0322 14:09:56.039418 17163 route_linux.go:98] skipping line: can't parse CIDR from routing table: 10.128.0.1 14:09:56 | ! I0322 14:09:56.039713 17163 round_trippers.go:383] GET https://10.128.0.3:8443/api/v1/services?timeout=1s 14:09:56 | ! I0322 14:09:56.039726 17163 round_trippers.go:390] Request Headers: 14:09:56 | ! I0322 14:09:56.039731 17163 round_trippers.go:393] Accept: application/json, */* 14:09:56 | ! I0322 14:09:56.039735 17163 round_trippers.go:393] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format 14:09:56 | ! I0322 14:09:56.042473 17163 round_trippers.go:408] Response Status: 200 OK in 2 milliseconds 14:09:56 | ! I0322 14:09:56.042490 17163 round_trippers.go:411] Response Headers: 14:09:56 | ! 
I0322 14:09:56.042495 17163 round_trippers.go:414] Content-Type: application/json 14:09:56 | ! I0322 14:09:56.042498 17163 round_trippers.go:414] Content-Length: 2208 14:09:56 | ! I0322 14:09:56.042501 17163 round_trippers.go:414] Date: Fri, 22 Mar 2019 14:09:56 GMT 14:09:56 | ! I0322 14:09:56.042564 17163 request.go:897] Response Body: {"kind":"ServiceList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/services","resourceVersion":"550"},"items":[{"metadata":{"name":"kubernetes","namespace":"default","selfLink":"/api/v1/namespaces/default/services/kubernetes","uid":"e64748b6-4cab-11e9-aacb-42010a800003","resourceVersion":"42","creationTimestamp":"2019-03-22T14:07:59Z","labels":{"component":"apiserver","provider":"kubernetes"}},"spec":{"ports":[{"name":"https","protocol":"TCP","port":443,"targetPort":8443}],"clusterIP":"10.96.0.1","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"nginx-svc","namespace":"default","selfLink":"/api/v1/namespaces/default/services/nginx-svc","uid":"16b0d6f1-4cac-11e9-aacb-42010a800003","resourceVersion":"495","creationTimestamp":"2019-03-22T14:09:21Z","labels":{"run":"nginx-svc"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"run\":\"nginx-svc\"},\"name\": [truncated 1184 chars] 14:09:56 | ! I0322 14:09:56.042756 17163 loadbalancer_patcher.go:71] kubernetes is not type LoadBalancer, skipping. 14:09:56 | ! I0322 14:09:56.042768 17163 loadbalancer_patcher.go:74] nginx-svc is type LoadBalancer. 14:09:56 | ! I0322 14:09:56.042773 17163 loadbalancer_patcher.go:71] kube-dns is not type LoadBalancer, skipping. 14:09:56 | > Status: 14:09:56 | > machine: minikube 14:09:56 | 14:09:56 | ! 
I0322 14:09:56.042780 17163 tunnel.go:130] sending report id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Running, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s()) 14:09:56 | ! I0322 14:09:56.042825 17163 tunnel_manager.go:108] minikube status: id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Running, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s()) 14:09:56 | ! I0322 14:09:56.042838 17163 tunnel_manager.go:83] sleep for 5s > pid: 17163 14:09:56 | > route: 10.96.0.0/12 -> 10.128.0.3 14:09:56 | > minikube: Running 14:09:56 | > services: [nginx-svc] 14:09:56 | > errors: 14:09:56 | > minikube: no errors 14:09:56 | > router: no errors 14:09:56 | > loadbalancer emulator: no errors 14:10:01 | ! I0322 14:10:01.043012 17163 tunnel_manager.go:81] waiting for tunnel to be ready for next check 14:10:01 | ! I0322 14:10:01.043059 17163 tunnel_manager.go:100] check received 14:10:01 | ! I0322 14:10:01.043068 17163 tunnel.go:119] updating tunnel status... 14:10:01 | ! I0322 14:10:01.043437 17163 none.go:231] checking for running kubelet ... 14:10:01 | ! I0322 14:10:01.043461 17163 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet 14:10:01 | ! I0322 14:10:01.049843 17163 tunnel.go:124] minikube is running, trying to add route10.96.0.0/12 -> 10.128.0.3 14:10:01 | ! I0322 14:10:01.051423 17163 route_linux.go:98] skipping line: can't parse CIDR from routing table: 10.128.0.1 14:10:01 | ! I0322 14:10:01.051747 17163 round_trippers.go:383] GET https://10.128.0.3:8443/api/v1/services?timeout=1s 14:10:01 | ! I0322 14:10:01.051764 17163 round_trippers.go:390] Request Headers: 14:10:01 | ! I0322 14:10:01.051770 17163 round_trippers.go:393] Accept: application/json, */* 14:10:01 | ! I0322 14:10:01.051776 17163 round_trippers.go:393] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format 14:10:01 | ! 
I0322 14:10:01.055287 17163 round_trippers.go:408] Response Status: 200 OK in 3 milliseconds 14:10:01 | ! I0322 14:10:01.055308 17163 round_trippers.go:411] Response Headers: 14:10:01 | ! I0322 14:10:01.055314 17163 round_trippers.go:414] Content-Type: application/json 14:10:01 | ! I0322 14:10:01.055318 17163 round_trippers.go:414] Content-Length: 2208 14:10:01 | ! I0322 14:10:01.055323 17163 round_trippers.go:414] Date: Fri, 22 Mar 2019 14:10:01 GMT 14:10:01 | ! I0322 14:10:01.055404 17163 request.go:897] Response Body: {"kind":"ServiceList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/services","resourceVersion":"559"},"items":[{"metadata":{"name":"kubernetes","namespace":"default","selfLink":"/api/v1/namespaces/default/services/kubernetes","uid":"e64748b6-4cab-11e9-aacb-42010a800003","resourceVersion":"42","creationTimestamp":"2019-03-22T14:07:59Z","labels":{"component":"apiserver","provider":"kubernetes"}},"spec":{"ports":[{"name":"https","protocol":"TCP","port":443,"targetPort":8443}],"clusterIP":"10.96.0.1","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"nginx-svc","namespace":"default","selfLink":"/api/v1/namespaces/default/services/nginx-svc","uid":"16b0d6f1-4cac-11e9-aacb-42010a800003","resourceVersion":"495","creationTimestamp":"2019-03-22T14:09:21Z","labels":{"run":"nginx-svc"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"run\":\"nginx-svc\"},\"name\": [truncated 1184 chars] 14:10:01 | ! I0322 14:10:01.055670 17163 loadbalancer_patcher.go:71] kubernetes is not type LoadBalancer, skipping. 14:10:01 | ! I0322 14:10:01.055684 17163 loadbalancer_patcher.go:74] nginx-svc is type LoadBalancer. 14:10:01 | ! I0322 14:10:01.055691 17163 loadbalancer_patcher.go:71] kube-dns is not type LoadBalancer, skipping. 14:10:01 | ! 
I0322 14:10:01.055698 17163 tunnel.go:130] sending report id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Running, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s()) 14:10:01 | ! I0322 14:10:01.055758 17163 tunnel_manager.go:108] minikube status: id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Running, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s()) 14:10:01 | ! I0322 14:10:01.055784 17163 tunnel_manager.go:83] sleep for 5s 14:10:01 | > Status: 14:10:01 | > machine: minikube 14:10:01 | > pid: 17163 14:10:01 | > route: 10.96.0.0/12 -> 10.128.0.3 14:10:01 | > minikube: Running 14:10:01 | > services: [nginx-svc] 14:10:01 | > errors: 14:10:01 | > minikube: no errors 14:10:01 | > router: no errors 14:10:01 | > loadbalancer emulator: no errors 14:10:06 | ! I0322 14:10:06.056109 17163 tunnel_manager.go:81] waiting for tunnel to be ready for next check 14:10:06 | ! I0322 14:10:06.056159 17163 tunnel_manager.go:100] check received 14:10:06 | ! I0322 14:10:06.056168 17163 tunnel.go:119] updating tunnel status... 14:10:06 | ! I0322 14:10:06.056506 17163 none.go:231] checking for running kubelet ... 14:10:06 | ! I0322 14:10:06.056529 17163 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet 14:10:06 | ! I0322 14:10:06.063084 17163 tunnel.go:124] minikube is running, trying to add route10.96.0.0/12 -> 10.128.0.3 14:10:06 | ! I0322 14:10:06.064483 17163 route_linux.go:98] skipping line: can't parse CIDR from routing table: 10.128.0.1 14:10:06 | ! I0322 14:10:06.064781 17163 round_trippers.go:383] GET https://10.128.0.3:8443/api/v1/services?timeout=1s 14:10:06 | ! I0322 14:10:06.064793 17163 round_trippers.go:390] Request Headers: 14:10:06 | ! I0322 14:10:06.064797 17163 round_trippers.go:393] Accept: application/json, */* 14:10:06 | ! I0322 14:10:06.064801 17163 round_trippers.go:393] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format 14:10:06 | ! 
I0322 14:10:06.067688 17163 round_trippers.go:408] Response Status: 200 OK in 2 milliseconds 14:10:06 | ! I0322 14:10:06.067710 17163 round_trippers.go:411] Response Headers: 14:10:06 | ! I0322 14:10:06.067716 17163 round_trippers.go:414] Content-Type: application/json 14:10:06 | ! I0322 14:10:06.067720 17163 round_trippers.go:414] Content-Length: 2208 14:10:06 | ! I0322 14:10:06.067725 17163 round_trippers.go:414] Date: Fri, 22 Mar 2019 14:10:06 GMT 14:10:06 | ! I0322 14:10:06.067972 17163 request.go:897] Response Body: {"kind":"ServiceList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/services","resourceVersion":"564"},"items":[{"metadata":{"name":"kubernetes","namespace":"default","selfLink":"/api/v1/namespaces/default/services/kubernetes","uid":"e64748b6-4cab-11e9-aacb-42010a800003","resourceVersion":"42","creationTimestamp":"2019-03-22T14:07:59Z","labels":{"component":"apiserver","provider":"kubernetes"}},"spec":{"ports":[{"name":"https","protocol":"TCP","port":443,"targetPort":8443}],"clusterIP":"10.96.0.1","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"nginx-svc","namespace":"default","selfLink":"/api/v1/namespaces/default/services/nginx-svc","uid":"16b0d6f1-4cac-11e9-aacb-42010a800003","resourceVersion":"495","creationTimestamp":"2019-03-22T14:09:21Z","labels":{"run":"nginx-svc"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"run\":\"nginx-svc\"},\"name\": [truncated 1184 chars] 14:10:06 | ! I0322 14:10:06.068196 17163 loadbalancer_patcher.go:71] kubernetes is not type LoadBalancer, skipping. 14:10:06 | ! I0322 14:10:06.068211 17163 loadbalancer_patcher.go:74] nginx-svc is type LoadBalancer. 14:10:06 | ! I0322 14:10:06.068217 17163 loadbalancer_patcher.go:71] kube-dns is not type LoadBalancer, skipping. 14:10:06 | ! 
I0322 14:10:06.068222 17163 tunnel.go:130] sending report id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Running, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s()) 14:10:06 | > Status: 14:10:06 | > machine: minikube 14:10:06 | > pid: 17163 14:10:06 | > route: 10.96.0.0/12 -> 10.128.0.3 14:10:06 | > minikube: Running 14:10:06 | > services: [nginx-svc] 14:10:06 | > errors: 14:10:06 | > minikube: no errors 14:10:06 | > router: no errors 14:10:06 | > loadbalancer emulator: no errors 14:10:06 | ! I0322 14:10:06.068277 17163 tunnel_manager.go:108] minikube status: id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Running, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s()) 14:10:06 | ! I0322 14:10:06.068293 17163 tunnel_manager.go:83] sleep for 5s 14:10:10 | ! - Launching proxy ... 14:10:10 | ! - Verifying proxy health ... 14:10:11 | ! I0322 14:10:11.068577 17163 tunnel_manager.go:81] waiting for tunnel to be ready for next check 14:10:11 | ! I0322 14:10:11.068616 17163 tunnel_manager.go:100] check received 14:10:11 | ! I0322 14:10:11.068623 17163 tunnel.go:119] updating tunnel status... 14:10:11 | ! I0322 14:10:11.068956 17163 none.go:231] checking for running kubelet ... 14:10:11 | ! I0322 14:10:11.068969 17163 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet 14:10:11 | ! I0322 14:10:11.075770 17163 tunnel.go:124] minikube is running, trying to add route10.96.0.0/12 -> 10.128.0.3 14:10:11 | ! I0322 14:10:11.077079 17163 route_linux.go:98] skipping line: can't parse CIDR from routing table: 10.128.0.1 14:10:11 | ! I0322 14:10:11.077414 17163 round_trippers.go:383] GET https://10.128.0.3:8443/api/v1/services?timeout=1s 14:10:11 | ! I0322 14:10:11.077437 17163 round_trippers.go:390] Request Headers: 14:10:11 | ! I0322 14:10:11.077442 17163 round_trippers.go:393] Accept: application/json, */* 14:10:11 | ! 
I0322 14:10:11.077446 17163 round_trippers.go:393] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format 14:10:11 | ! I0322 14:10:11.080743 17163 round_trippers.go:408] Response Status: 200 OK in 3 milliseconds 14:10:11 | ! I0322 14:10:11.080761 17163 round_trippers.go:411] Response Headers: 14:10:11 | ! I0322 14:10:11.080768 17163 round_trippers.go:414] Content-Type: application/json 14:10:11 | ! I0322 14:10:11.080774 17163 round_trippers.go:414] Content-Length: 3377 14:10:11 | ! I0322 14:10:11.080781 17163 round_trippers.go:414] Date: Fri, 22 Mar 2019 14:10:11 GMT 14:10:11 | ! I0322 14:10:11.081110 17163 request.go:897] Response Body: {"kind":"ServiceList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/services","resourceVersion":"589"},"items":[{"metadata":{"name":"kubernetes","namespace":"default","selfLink":"/api/v1/namespaces/default/services/kubernetes","uid":"e64748b6-4cab-11e9-aacb-42010a800003","resourceVersion":"42","creationTimestamp":"2019-03-22T14:07:59Z","labels":{"component":"apiserver","provider":"kubernetes"}},"spec":{"ports":[{"name":"https","protocol":"TCP","port":443,"targetPort":8443}],"clusterIP":"10.96.0.1","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},{"metadata":{"name":"nginx-svc","namespace":"default","selfLink":"/api/v1/namespaces/default/services/nginx-svc","uid":"16b0d6f1-4cac-11e9-aacb-42010a800003","resourceVersion":"495","creationTimestamp":"2019-03-22T14:09:21Z","labels":{"run":"nginx-svc"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"run\":\"nginx-svc\"},\"name\": [truncated 2353 chars] 14:10:11 | ! I0322 14:10:11.081445 17163 loadbalancer_patcher.go:71] kubernetes is not type LoadBalancer, skipping. 14:10:11 | ! I0322 14:10:11.081463 17163 loadbalancer_patcher.go:74] nginx-svc is type LoadBalancer. 14:10:11 | ! 
I0322 14:10:11.081470 17163 loadbalancer_patcher.go:71] kube-dns is not type LoadBalancer, skipping. 14:10:11 | ! I0322 14:10:11.081477 17163 loadbalancer_patcher.go:71] kubernetes-dashboard is not type LoadBalancer, skipping. 14:10:11 | ! I0322 14:10:11.081484 17163 tunnel.go:130] sending report id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Running, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s()) 14:10:11 | ! I0322 14:10:11.081542 17163 tunnel_manager.go:108] minikube status: id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Running, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s()) 14:10:11 | ! I0322 14:10:11.081565 17163 tunnel_manager.go:83] sleep for 5s 14:10:11 | > Status: 14:10:11 | > machine: minikube 14:10:11 | > pid: 17163 14:10:11 | > route: 10.96.0.0/12 -> 10.128.0.3 14:10:11 | > minikube: Running 14:10:11 | > services: [nginx-svc] 14:10:11 | > errors: 14:10:11 | > minikube: no errors 14:10:11 | > router: no errors 14:10:11 | > loadbalancer emulator: no errors --- FAIL: TestFunctional (125.59s) --- PASS: TestFunctional/Status (0.39s) cluster_status_test.go:35: Checking if cluster is healthy. --- FAIL: TestFunctional/Tunnel (4.52s) tunnel_test.go:45: starting tunnel test... tunnel_test.go:62: deploying nginx... tunnel_test.go:83: getting nginx ingress... tunnel_test.go:98: svc should have ingress after tunnel is created, but it was empty! 
--- PASS: TestFunctional/Addons (0.01s) --- PASS: TestFunctional/Logs (1.24s) --- PASS: TestFunctional/ServicesList (0.16s) --- PASS: TestFunctional/DNS (2.71s) --- PASS: TestFunctional/Provisioning (4.84s) --- PASS: TestFunctional/Dashboard (46.59s) === RUN TestFunctionalContainerd --- SKIP: TestFunctionalContainerd (0.00s) functional_test.go:56: Can't run containerd backend with none driver === RUN TestPersistence --- SKIP: TestPersistence (0.00s) persistence_test.go:34: skipping test as none driver does not support persistence === RUN TestStartStop === RUN TestStartStop/docker+cache 14:10:11 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 config set WantReportErrorPrompt false] 14:10:11 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 delete] 14:10:12 | > # Uninstalling Kubernetes v1.13.4 using kubeadm ... 14:10:16 | ! I0322 14:10:16.081701 17163 tunnel_manager.go:81] waiting for tunnel to be ready for next check 14:10:16 | ! I0322 14:10:16.081741 17163 tunnel_manager.go:100] check received 14:10:16 | ! I0322 14:10:16.081751 17163 tunnel.go:119] updating tunnel status... 14:10:16 | ! I0322 14:10:16.082120 17163 none.go:231] checking for running kubelet ... 14:10:16 | ! I0322 14:10:16.082135 17163 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet 14:10:16 | ! I0322 14:10:16.088618 17163 none.go:125] kubelet not running: running command: systemctl is-active --quiet service kubelet: exit status 3 14:10:16 | ! I0322 14:10:16.088676 17163 tunnel.go:130] sending report id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Stopped, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s()) 14:10:16 | ! I0322 14:10:16.088738 17163 tunnel_manager.go:108] minikube status: id({10.96.0.0/12 -> 10.128.0.3 minikube 17163}), minikube(Stopped, e:%!s()), route(10.96.0.0/12 -> 10.128.0.3, e:%!s()), services([nginx-svc], e:%!s()) 14:10:16 | ! 
I0322 14:10:16.088753 17163 tunnel_manager.go:110] minikube status: Stopped, cleaning up and quitting... 14:10:16 | ! I0322 14:10:16.088759 17163 tunnel.go:101] cleaning up 10.96.0.0/12 -> 10.128.0.3 14:10:16 | > Status: 14:10:16 | > machine: minikube 14:10:16 | > pid: 17163 14:10:16 | > route: 10.96.0.0/12 -> 10.128.0.3 14:10:16 | > minikube: Stopped 14:10:16 | > services: [nginx-svc] 14:10:16 | > errors: 14:10:16 | > minikube: no errors 14:10:16 | > router: no errors 14:10:16 | > loadbalancer emulator: no errors 14:10:16 | ! I0322 14:10:16.089932 17163 route_linux.go:98] skipping line: can't parse CIDR from routing table: 10.128.0.1 14:10:16 | ! I0322 14:10:16.089964 17163 route_linux.go:129] Cleaning up route for CIDR 10.96.0.0/12 to gateway 10.128.0.3 14:10:16 | ! I0322 14:10:16.097051 17163 route_linux.go:133] 14:10:16 | ! I0322 14:10:16.097071 17163 registry.go:140] removing tunnel from registry: 10.96.0.0/12 -> 10.128.0.3 14:10:16 | ! I0322 14:10:16.097177 17163 registry.go:156] tunnels after remove: [] Status: machine: minikube pid: 17163 route: 10.96.0.0/12 -> 10.128.0.3 minikube: Running services: [nginx-svc] errors: minikube: no errors router: no errors loadbalancer emulator: no errors Status: machine: 
minikube pid: 17163 route: 10.96.0.0/12 -> 10.128.0.3 minikube: Stopped services: [nginx-svc] errors: minikube: no errors router: no errors loadbalancer emulator: no errors 14:10:17 | > x Deleting "minikube" from none ... 14:10:17 | > - The "minikube" cluster has been deleted. 14:10:17 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 status --format={{.Host}} --v=10 --logtostderr --bootstrapper=kubeadm] 14:10:17 | ! I0322 14:10:17.801258 19942 notify.go:126] Checking for updates... 14:10:17 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 start --vm-driver=none --v=10 --logtostderr --bootstrapper=kubeadm --container-runtime=docker --cache-images --alsologtostderr --v=2] 14:10:17 | ! I0322 14:10:17.892786 19952 notify.go:126] Checking for updates... 14:10:17 | > o minikube v0.35.0 on linux (amd64) 14:10:17 | > $ Downloading Kubernetes v1.13.4 images in the background ... 14:10:17 | ! I0322 14:10:17.957697 19952 start.go:605] Saving config: 14:10:17 | ! 
{ 14:10:17 | ! "MachineConfig": { 14:10:17 | ! "MinikubeISO": "https://storage.googleapis.com/minikube/iso/minikube-v0.35.0.iso", 14:10:17 | ! "Memory": 2048, 14:10:17 | ! "CPUs": 2, 14:10:17 | ! "DiskSize": 20000, 14:10:17 | ! "VMDriver": "none", 14:10:17 | ! "ContainerRuntime": "docker", 14:10:17 | ! "HyperkitVpnKitSock": "", 14:10:17 | ! "HyperkitVSockPorts": [], 14:10:17 | ! "XhyveDiskDriver": "ahci-hd", 14:10:17 | ! "DockerEnv": null, 14:10:17 | ! "InsecureRegistry": null, 14:10:17 | ! "RegistryMirror": null, 14:10:17 | ! "HostOnlyCIDR": "192.168.99.1/24", 14:10:17 | ! "HypervVirtualSwitch": "", 14:10:17 | ! "KvmNetwork": "default", 14:10:17 | ! "DockerOpt": null, 14:10:17 | ! "DisableDriverMounts": false, 14:10:17 | ! "NFSShare": [], 14:10:17 | ! "NFSSharesRoot": "/nfsshares", 14:10:17 | ! "UUID": "", 14:10:17 | ! "GPU": false, 14:10:17 | ! "NoVTXCheck": false 14:10:17 | ! }, 14:10:17 | ! "KubernetesConfig": { 14:10:17 | ! "KubernetesVersion": "v1.13.4", 14:10:17 | ! "NodeIP": "", 14:10:17 | ! "NodePort": 8443, 14:10:17 | ! "NodeName": "minikube", 14:10:17 | ! "APIServerName": "minikubeCA", 14:10:17 | ! "APIServerNames": null, 14:10:17 | ! "APIServerIPs": null, 14:10:17 | ! "DNSDomain": "cluster.local", 14:10:17 | ! "ContainerRuntime": "docker", 14:10:17 | ! "CRISocket": "", 14:10:17 | ! "NetworkPlugin": "", 14:10:17 | ! "FeatureGates": "", 14:10:17 | ! "ServiceCIDR": "10.96.0.0/12", 14:10:17 | ! "ImageRepository": "", 14:10:17 | ! "ExtraOptions": null, 14:10:17 | ! "ShouldLoadCachedImages": true, 14:10:17 | ! "EnableDefaultCNI": false 14:10:17 | ! } 14:10:17 | ! } 14:10:17 | ! I0322 14:10:17.957838 19952 cache_images.go:292] Attempting to cache image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 14:10:17 | ! I0322 14:10:17.957869 19952 cluster.go:68] Machine does not exist... 
provisioning new machine 14:10:17 | ! I0322 14:10:17.957866 19952 cache_images.go:292] Attempting to cache image: k8s.gcr.io/pause:3.1 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause_3.1 14:10:17 | ! I0322 14:10:17.957875 19952 cache_images.go:292] Attempting to cache image: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.8 14:10:17 | ! I0322 14:10:17.957903 19952 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-proxy-amd64:v1.13.4 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-proxy-amd64_v1.13.4 14:10:17 | ! I0322 14:10:17.957909 19952 cache_images.go:292] Attempting to cache image: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.8 14:10:17 | ! I0322 14:10:17.957879 19952 cluster.go:69] Provisioning machine with config: {MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v0.35.0.iso Memory:2048 CPUs:2 DiskSize:20000 VMDriver:none ContainerRuntime:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] XhyveDiskDriver:ahci-hd DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: KvmNetwork:default Downloader:{} DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: GPU:false NoVTXCheck:false} 14:10:17 | ! 
I0322 14:10:17.957925 19952 cache_images.go:292] Attempting to cache image: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.8 14:10:17 | ! I0322 14:10:17.957946 19952 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-scheduler-amd64:v1.13.4 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-scheduler-amd64_v1.13.4 14:10:17 | ! I0322 14:10:17.957958 19952 cache_images.go:292] Attempting to cache image: k8s.gcr.io/coredns:1.2.6 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/coredns_1.2.6 14:10:17 | ! I0322 14:10:17.957946 19952 cache_images.go:292] Attempting to cache image: k8s.gcr.io/etcd-amd64:3.2.24 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.2.24 14:10:17 | ! I0322 14:10:17.957969 19952 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-controller-manager-amd64:v1.13.4 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-amd64_v1.13.4 14:10:17 | ! I0322 14:10:17.957980 19952 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1 14:10:17 | ! 
14:10:17 | ! I0322 14:10:17.957985 19952 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-apiserver-amd64:v1.13.4 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-apiserver-amd64_v1.13.4
14:10:17 | ! I0322 14:10:17.957999 19952 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-addon-manager:v8.6 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v8.6
14:10:17 | ! I0322 14:10:17.958001 19952 cache_images.go:292] Attempting to cache image: k8s.gcr.io/pause-amd64:3.1 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause-amd64_3.1
14:10:17 | ! I0322 14:10:17.958040 19952 cache_images.go:83] Successfully cached all images.
14:10:17 | > > Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
14:10:17 | > - "minikube" IP address is 10.128.0.3
14:10:17 | ! I0322 14:10:17.959359 19952 start.go:605] Saving config:
14:10:17 | ! {
14:10:17 | !     "MachineConfig": {
14:10:17 | !         "MinikubeISO": "https://storage.googleapis.com/minikube/iso/minikube-v0.35.0.iso",
14:10:17 | !         "Memory": 2048,
14:10:17 | !         "CPUs": 2,
14:10:17 | !         "DiskSize": 20000,
14:10:17 | !         "VMDriver": "none",
14:10:17 | !         "ContainerRuntime": "docker",
14:10:17 | !         "HyperkitVpnKitSock": "",
14:10:17 | !         "HyperkitVSockPorts": [],
14:10:17 | !         "XhyveDiskDriver": "ahci-hd",
14:10:17 | !         "DockerEnv": null,
14:10:17 | !         "InsecureRegistry": null,
14:10:17 | !         "RegistryMirror": null,
14:10:17 | !         "HostOnlyCIDR": "192.168.99.1/24",
14:10:17 | !         "HypervVirtualSwitch": "",
14:10:17 | !         "KvmNetwork": "default",
14:10:17 | !         "DockerOpt": null,
14:10:17 | !         "DisableDriverMounts": false,
14:10:17 | !         "NFSShare": [],
14:10:17 | !         "NFSSharesRoot": "/nfsshares",
14:10:17 | !         "UUID": "",
14:10:17 | !         "GPU": false,
14:10:17 | !         "NoVTXCheck": false
14:10:17 | !     },
14:10:17 | !     "KubernetesConfig": {
14:10:17 | !         "KubernetesVersion": "v1.13.4",
14:10:17 | !         "NodeIP": "10.128.0.3",
14:10:17 | !         "NodePort": 8443,
14:10:17 | !         "NodeName": "minikube",
14:10:17 | !         "APIServerName": "minikubeCA",
14:10:17 | !         "APIServerNames": null,
14:10:17 | !         "APIServerIPs": null,
14:10:17 | !         "DNSDomain": "cluster.local",
14:10:17 | !         "ContainerRuntime": "docker",
14:10:17 | !         "CRISocket": "",
14:10:17 | !         "NetworkPlugin": "",
14:10:17 | !         "FeatureGates": "",
14:10:17 | !         "ServiceCIDR": "10.96.0.0/12",
14:10:17 | !         "ImageRepository": "",
14:10:17 | !         "ExtraOptions": null,
14:10:17 | !         "ShouldLoadCachedImages": true,
14:10:17 | !         "EnableDefaultCNI": false
14:10:17 | !     }
14:10:17 | ! }
14:10:17 | ! I0322 14:10:17.959554 19952 exec_runner.go:39] Run: systemctl is-active --quiet service containerd
14:10:17 | > - Configuring Docker as the container runtime ...
14:10:17 | ! I0322 14:10:17.965330 19952 exec_runner.go:39] Run: systemctl is-active --quiet service crio
14:10:17 | ! I0322 14:10:17.970101 19952 exec_runner.go:39] Run: systemctl is-active --quiet service rkt-api
14:10:17 | ! I0322 14:10:17.974808 19952 exec_runner.go:39] Run: sudo systemctl restart docker
14:10:19 | ! I0322 14:10:19.671648 19952 exec_runner.go:50] Run with output: docker version --format '{{.Server.Version}}'
14:10:19 | > - Version of container runtime is 18.06.1-ce
14:10:19 | > - Preparing Kubernetes environment ...
14:10:19 | ! I0322 14:10:19.740021 19952 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
14:10:19 | ! I0322 14:10:19.740061 19952 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.8
14:10:19 | ! I0322 14:10:19.744582 19952 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.2.24
14:10:19 | ! I0322 14:10:19.744590 19952 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.8
14:10:19 | ! I0322 14:10:19.744870 19952 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-proxy-amd64_v1.13.4
14:10:19 | ! I0322 14:10:19.749396 19952 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-scheduler-amd64_v1.13.4
14:10:19 | ! I0322 14:10:19.749753 19952 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.8
14:10:19 | ! I0322 14:10:19.749978 19952 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-amd64_v1.13.4
14:10:19 | ! I0322 14:10:19.749984 19952 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/coredns_1.2.6
14:10:19 | ! I0322 14:10:19.757219 19952 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-apiserver-amd64_v1.13.4
14:10:19 | ! I0322 14:10:19.757451 19952 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause-amd64_3.1
14:10:19 | ! I0322 14:10:19.758106 19952 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause_3.1
14:10:19 | ! I0322 14:10:19.759004 19952 docker.go:89] Loading image: /tmp/pause_3.1
14:10:19 | ! I0322 14:10:19.759086 19952 exec_runner.go:39] Run: docker load -i /tmp/pause_3.1
14:10:19 | ! I0322 14:10:19.766403 19952 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1
14:10:19 | ! I0322 14:10:19.772803 19952 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v8.6
14:10:20 | ! I0322 14:10:20.012517 19952 exec_runner.go:39] Run: sudo rm -rf /tmp/pause_3.1
14:10:20 | ! I0322 14:10:20.019817 19952 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause_3.1 from cache
14:10:20 | ! I0322 14:10:20.019853 19952 docker.go:89] Loading image: /tmp/coredns_1.2.6
14:10:20 | ! I0322 14:10:20.019860 19952 exec_runner.go:39] Run: docker load -i /tmp/coredns_1.2.6
14:10:20 | ! I0322 14:10:20.185418 19952 exec_runner.go:39] Run: sudo rm -rf /tmp/coredns_1.2.6
14:10:20 | ! I0322 14:10:20.194776 19952 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/coredns_1.2.6 from cache
14:10:20 | ! I0322 14:10:20.194815 19952 docker.go:89] Loading image: /tmp/pause-amd64_3.1
14:10:20 | ! I0322 14:10:20.194823 19952 exec_runner.go:39] Run: docker load -i /tmp/pause-amd64_3.1
14:10:20 | ! I0322 14:10:20.328689 19952 exec_runner.go:39] Run: sudo rm -rf /tmp/pause-amd64_3.1
14:10:20 | ! I0322 14:10:20.335752 19952 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause-amd64_3.1 from cache
14:10:20 | ! I0322 14:10:20.335794 19952 docker.go:89] Loading image: /tmp/k8s-dns-kube-dns-amd64_1.14.8
14:10:20 | ! I0322 14:10:20.335803 19952 exec_runner.go:39] Run: docker load -i /tmp/k8s-dns-kube-dns-amd64_1.14.8
14:10:20 | ! I0322 14:10:20.502581 19952 exec_runner.go:39] Run: sudo rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.8
14:10:20 | ! I0322 14:10:20.512795 19952 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.8 from cache
14:10:20 | ! I0322 14:10:20.512837 19952 docker.go:89] Loading image: /tmp/k8s-dns-sidecar-amd64_1.14.8
14:10:20 | ! I0322 14:10:20.512858 19952 exec_runner.go:39] Run: docker load -i /tmp/k8s-dns-sidecar-amd64_1.14.8
14:10:20 | ! I0322 14:10:20.677722 19952 exec_runner.go:39] Run: sudo rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.8
14:10:20 | ! I0322 14:10:20.686660 19952 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.8 from cache
14:10:20 | ! I0322 14:10:20.686712 19952 docker.go:89] Loading image: /tmp/storage-provisioner_v1.8.1
14:10:20 | ! I0322 14:10:20.686721 19952 exec_runner.go:39] Run: docker load -i /tmp/storage-provisioner_v1.8.1
14:10:20 | ! I0322 14:10:20.847567 19952 exec_runner.go:39] Run: sudo rm -rf /tmp/storage-provisioner_v1.8.1
14:10:20 | ! I0322 14:10:20.858599 19952 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 from cache
14:10:20 | ! I0322 14:10:20.858649 19952 docker.go:89] Loading image: /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
14:10:20 | ! I0322 14:10:20.858663 19952 exec_runner.go:39] Run: docker load -i /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
14:10:21 | ! I0322 14:10:21.013724 19952 exec_runner.go:39] Run: sudo rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
14:10:21 | ! I0322 14:10:21.022990 19952 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.8 from cache
14:10:21 | ! I0322 14:10:21.023034 19952 docker.go:89] Loading image: /tmp/kube-addon-manager_v8.6
14:10:21 | ! I0322 14:10:21.023045 19952 exec_runner.go:39] Run: docker load -i /tmp/kube-addon-manager_v8.6
14:10:21 | ! I0322 14:10:21.191483 19952 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-addon-manager_v8.6
14:10:21 | ! I0322 14:10:21.202747 19952 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v8.6 from cache
14:10:21 | ! I0322 14:10:21.202793 19952 docker.go:89] Loading image: /tmp/kube-apiserver-amd64_v1.13.4
14:10:21 | ! I0322 14:10:21.202803 19952 exec_runner.go:39] Run: docker load -i /tmp/kube-apiserver-amd64_v1.13.4
14:10:21 | ! I0322 14:10:21.432634 19952 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-apiserver-amd64_v1.13.4
14:10:21 | ! I0322 14:10:21.447726 19952 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-apiserver-amd64_v1.13.4 from cache
14:10:21 | ! I0322 14:10:21.447770 19952 docker.go:89] Loading image: /tmp/kube-proxy-amd64_v1.13.4
14:10:21 | ! I0322 14:10:21.447779 19952 exec_runner.go:39] Run: docker load -i /tmp/kube-proxy-amd64_v1.13.4
14:10:21 | ! I0322 14:10:21.620470 19952 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-proxy-amd64_v1.13.4
14:10:21 | ! I0322 14:10:21.632540 19952 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-proxy-amd64_v1.13.4 from cache
14:10:21 | ! I0322 14:10:21.632587 19952 docker.go:89] Loading image: /tmp/kube-scheduler-amd64_v1.13.4
14:10:21 | ! I0322 14:10:21.632596 19952 exec_runner.go:39] Run: docker load -i /tmp/kube-scheduler-amd64_v1.13.4
14:10:21 | ! I0322 14:10:21.817345 19952 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-scheduler-amd64_v1.13.4
14:10:21 | ! I0322 14:10:21.830910 19952 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-scheduler-amd64_v1.13.4 from cache
14:10:21 | ! I0322 14:10:21.830959 19952 docker.go:89] Loading image: /tmp/etcd-amd64_3.2.24
14:10:21 | ! I0322 14:10:21.830969 19952 exec_runner.go:39] Run: docker load -i /tmp/etcd-amd64_3.2.24
14:10:22 | ! I0322 14:10:22.055847 19952 exec_runner.go:39] Run: sudo rm -rf /tmp/etcd-amd64_3.2.24
14:10:22 | ! I0322 14:10:22.076419 19952 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.2.24 from cache
14:10:22 | ! I0322 14:10:22.076467 19952 docker.go:89] Loading image: /tmp/kubernetes-dashboard-amd64_v1.10.1
14:10:22 | ! I0322 14:10:22.076477 19952 exec_runner.go:39] Run: docker load -i /tmp/kubernetes-dashboard-amd64_v1.10.1
14:10:22 | ! I0322 14:10:22.266123 19952 exec_runner.go:39] Run: sudo rm -rf /tmp/kubernetes-dashboard-amd64_v1.10.1
14:10:22 | ! I0322 14:10:22.282314 19952 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1 from cache
14:10:22 | ! I0322 14:10:22.282356 19952 docker.go:89] Loading image: /tmp/kube-controller-manager-amd64_v1.13.4
14:10:22 | ! I0322 14:10:22.282370 19952 exec_runner.go:39] Run: docker load -i /tmp/kube-controller-manager-amd64_v1.13.4
14:10:22 | ! I0322 14:10:22.503777 19952 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-controller-manager-amd64_v1.13.4
14:10:22 | ! I0322 14:10:22.519271 19952 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-amd64_v1.13.4 from cache
14:10:22 | ! I0322 14:10:22.519369 19952 cache_images.go:109] Successfully loaded all cached images.
14:10:22 | ! I0322 14:10:22.519643 19952 kubeadm.go:452] kubelet v1.13.4 config:
14:10:22 | ! [Unit]
14:10:22 | ! Wants=docker.socket
14:10:22 | ! [Service]
14:10:22 | ! ExecStart=
14:10:22 | ! ExecStart=/usr/bin/kubelet --allow-privileged=true --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests
14:10:22 | ! [Install]
14:10:22 | ! I0322 14:10:22.653324 19952 exec_runner.go:39] Run:
14:10:22 | ! sudo systemctl daemon-reload &&
14:10:22 | ! sudo systemctl enable kubelet &&
14:10:22 | ! sudo systemctl start kubelet
14:10:22 | ! I0322 14:10:22.795678 19952 certs.go:46] Setting up certificates for IP: 10.128.0.3
14:10:22 | ! I0322 14:10:22.810552 19952 kubeconfig.go:127] Using kubeconfig: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/kubeconfig
14:10:22 | > : Waiting for image downloads to complete ...
14:10:22 | > - Pulling images required by Kubernetes v1.13.4 ...
14:10:22 | ! I0322 14:10:22.812181 19952 exec_runner.go:39] Run: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml
14:10:24 | > - Launching Kubernetes v1.13.4 using kubeadm ...
14:10:24 | ! I0322 14:10:24.108372 19952 exec_runner.go:50] Run with output:
14:10:24 | ! sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI
14:10:56 | ! I0322 14:10:56.660038 19952 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-apiserver" ...
14:10:56 | ! I0322 14:10:56.679206 19952 kubernetes.go:134] Found 0 Pods for label selector component=kube-apiserver
14:12:08 | ! I0322 14:12:08.684383 19952 kubernetes.go:134] Found 1 Pods for label selector component=kube-apiserver
14:12:15 | ! I0322 14:12:15.683856 19952 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-proxy" ...
14:12:15 | ! I0322 14:12:15.686850 19952 kubernetes.go:134] Found 1 Pods for label selector k8s-app=kube-proxy
14:12:15 | ! I0322 14:12:15.686907 19952 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=etcd" ...
14:12:15 | ! I0322 14:12:15.689794 19952 kubernetes.go:134] Found 1 Pods for label selector component=etcd
14:12:15 | ! I0322 14:12:15.689884 19952 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-scheduler" ...
14:12:15 | ! I0322 14:12:15.692613 19952 kubernetes.go:134] Found 1 Pods for label selector component=kube-scheduler
14:12:15 | ! I0322 14:12:15.692689 19952 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-controller-manager" ...
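The long `kubeadm init` invocation in the log above is just a base command plus a list of `--ignore-preflight-errors` values. As a minimal sketch, the same command line can be rebuilt from that list (the values are copied from the log; the loop itself is illustrative, not minikube's actual Go code):

```shell
# Rebuild the kubeadm init command seen in the log from its preflight-ignore list.
IGNORES="DirAvailable--etc-kubernetes-manifests DirAvailable--data-minikube \
Port-10250 FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml \
FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml \
FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml \
FileAvailable--etc-kubernetes-manifests-etcd.yaml Swap CRI"

CMD="sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml"
for i in $IGNORES; do
  CMD="$CMD --ignore-preflight-errors=$i"
done
echo "$CMD"
```

Each `--ignore-preflight-errors` entry tells kubeadm to downgrade that specific preflight failure to a warning, which is what lets minikube re-run `init` on a host that already has manifests, a bound port 10250, or swap enabled.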
14:12:15 | ! I0322 14:12:15.695410 19952 kubernetes.go:134] Found 1 Pods for label selector component=kube-controller-manager
14:12:15 | ! I0322 14:12:15.695467 19952 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-addon-manager" ...
14:12:15 | ! I0322 14:12:15.698588 19952 kubernetes.go:134] Found 1 Pods for label selector component=kube-addon-manager
14:12:15 | ! I0322 14:12:15.698637 19952 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-dns" ...
14:12:15 | ! I0322 14:12:15.701614 19952 kubernetes.go:134] Found 2 Pods for label selector k8s-app=kube-dns
14:12:15 | > : Waiting for pods: apiserver proxy etcd scheduler controller addon-manager dns
14:12:15 | > - Configuring cluster permissions ...
14:12:15 | ! I0322 14:12:15.712024 19952 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-apiserver" ...
14:12:15 | ! I0322 14:12:15.715380 19952 kubernetes.go:134] Found 1 Pods for label selector component=kube-apiserver
14:12:15 | ! I0322 14:12:15.715416 19952 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-proxy" ...
14:12:15 | ! I0322 14:12:15.718302 19952 kubernetes.go:134] Found 1 Pods for label selector k8s-app=kube-proxy
14:12:15 | ! I0322 14:12:15.718341 19952 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=etcd" ...
14:12:15 | ! I0322 14:12:15.720994 19952 kubernetes.go:134] Found 1 Pods for label selector component=etcd
14:12:15 | ! I0322 14:12:15.721029 19952 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-scheduler" ...
14:12:15 | ! I0322 14:12:15.723908 19952 kubernetes.go:134] Found 1 Pods for label selector component=kube-scheduler
14:12:15 | ! I0322 14:12:15.723939 19952 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-controller-manager" ...
14:12:15 | ! I0322 14:12:15.728200 19952 kubernetes.go:134] Found 1 Pods for label selector component=kube-controller-manager
14:12:15 | ! I0322 14:12:15.728240 19952 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-addon-manager" ...
14:12:15 | ! I0322 14:12:15.731317 19952 kubernetes.go:134] Found 1 Pods for label selector component=kube-addon-manager
14:12:15 | ! I0322 14:12:15.731359 19952 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-dns" ...
14:12:15 | ! I0322 14:12:15.734676 19952 kubernetes.go:134] Found 2 Pods for label selector k8s-app=kube-dns
14:12:15 | ! I0322 14:12:15.734753 19952 exec_runner.go:50] Run with output: sudo systemctl is-active kubelet
14:12:15 | ! I0322 14:12:15.757564 19952 kubeadm.go:134] https://10.128.0.3:8443/healthz response: &{Status:200 OK StatusCode:200 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Date:[Fri, 22 Mar 2019 14:12:15 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] Body:0xc000630840 ContentLength:2 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0000ee400 TLS:0xc0005e0160}
14:12:15 | > - Verifying component health .....
14:12:15 | > > Configuring local host environment ...
14:12:15 | ! ! The 'none' driver provides limited isolation and may reduce system security and reliability.
14:12:15 | ! ! For more information, see:
14:12:15 | > - https://github.com/kubernetes/minikube/blob/master/docs/vmdriver-none.md
14:12:15 | ! ! kubectl and minikube configuration will be stored in /home/jenkins
14:12:15 | ! ! To use kubectl or minikube commands as your own user, you may
14:12:15 | ! ! need to relocate them. For example, to overwrite your own settings:
14:12:15 | > - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
14:12:15 | > - sudo chown -R $USER $HOME/.kube $HOME/.minikube
14:12:15 | > i This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
14:12:15 | > + kubectl is now configured to use "minikube"
14:12:15 | > = Done! Thank you for using minikube!
14:12:15 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 status --format={{.Host}} --v=10 --logtostderr --bootstrapper=kubeadm]
14:12:15 | ! I0322 14:12:15.793196 23529 notify.go:126] Checking for updates...
14:12:15 | ! I0322 14:12:15.866746 23529 none.go:231] checking for running kubelet ...
14:12:15 | ! I0322 14:12:15.866784 23529 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet
14:12:15 | ! I0322 14:12:15.873762 23529 exec_runner.go:50] Run with output: sudo systemctl is-active kubelet
14:12:15 | ! I0322 14:12:15.886570 23529 interface.go:360] Looking for default routes with IPv4 addresses
14:12:15 | ! I0322 14:12:15.886593 23529 interface.go:365] Default route transits interface "eth0"
14:12:15 | ! I0322 14:12:15.886884 23529 interface.go:174] Interface eth0 is up
14:12:15 | ! I0322 14:12:15.886950 23529 interface.go:222] Interface "eth0" has 1 addresses :[10.128.0.3/32].
14:12:15 | ! I0322 14:12:15.886965 23529 interface.go:189] Checking addr 10.128.0.3/32.
14:12:15 | ! I0322 14:12:15.886971 23529 interface.go:196] IP found 10.128.0.3
14:12:15 | ! I0322 14:12:15.886978 23529 interface.go:228] Found valid IPv4 address 10.128.0.3 for interface "eth0".
14:12:15 | ! I0322 14:12:15.886986 23529 interface.go:371] Found active IP 10.128.0.3
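The `minikube status` probe above hinges on systemctl exit codes: `systemctl is-active --quiet service kubelet` exits 0 when the unit is active and non-zero (3 for an inactive unit) otherwise, and the none driver maps that result to Running/Stopped. A minimal sketch of that mapping, with a stubbed probe (`is_active` stands in for the real systemctl call, which is not available here):

```shell
# is_active is a stub for: systemctl is-active --quiet service kubelet
# (exit 0 = unit active, exit 3 = unit inactive, per systemctl conventions).
is_active() { return 3; }

if is_active; then
  status="Running"
else
  status="Stopped"
fi
echo "$status"
```

With the stub returning 3, this prints `Stopped`, matching the behavior the log shows after `minikube stop`.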
14:12:15 | ! I0322 14:12:15.893797 23529 kubeadm.go:134] https://10.128.0.3:8443/healthz response: &{Status:200 OK StatusCode:200 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Date:[Fri, 22 Mar 2019 14:12:15 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] Body:0xc0006daa40 ContentLength:2 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001d4d00 TLS:0xc0000c6bb0}
14:12:15 | > Running
14:12:15 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 ip]
14:12:16 | > 10.128.0.3
14:12:16 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 stop]
14:12:16 | > : Stopping "minikube" in none ...
14:12:26 | > - "minikube" stopped.
14:12:26 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 status --format={{.Host}} --v=10 --logtostderr --bootstrapper=kubeadm]
14:12:26 | ! I0322 14:12:26.513245 24469 notify.go:126] Checking for updates...
14:12:26 | ! I0322 14:12:26.578554 24469 none.go:231] checking for running kubelet ...
14:12:26 | ! I0322 14:12:26.578576 24469 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet
14:12:26 | ! I0322 14:12:26.584020 24469 none.go:125] kubelet not running: running command: systemctl is-active --quiet service kubelet: exit status 3
14:12:26 | > Stopped
14:12:26 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 start --vm-driver=none --v=10 --logtostderr --bootstrapper=kubeadm --container-runtime=docker --cache-images --alsologtostderr --v=2]
14:12:26 | ! I0322 14:12:26.610929 24482 notify.go:126] Checking for updates...
14:12:26 | > o minikube v0.35.0 on linux (amd64)
14:12:26 | > $ Downloading Kubernetes v1.13.4 images in the background ...
14:12:26 | ! I0322 14:12:26.676292 24482 start.go:605] Saving config:
14:12:26 | ! {
14:12:26 | !     "MachineConfig": {
14:12:26 | !         "MinikubeISO": "https://storage.googleapis.com/minikube/iso/minikube-v0.35.0.iso",
14:12:26 | !         "Memory": 2048,
14:12:26 | !         "CPUs": 2,
14:12:26 | !         "DiskSize": 20000,
14:12:26 | !         "VMDriver": "none",
14:12:26 | !         "ContainerRuntime": "docker",
14:12:26 | !         "HyperkitVpnKitSock": "",
14:12:26 | !         "HyperkitVSockPorts": [],
14:12:26 | !         "XhyveDiskDriver": "ahci-hd",
14:12:26 | !         "DockerEnv": null,
14:12:26 | !         "InsecureRegistry": null,
14:12:26 | !         "RegistryMirror": null,
14:12:26 | !         "HostOnlyCIDR": "192.168.99.1/24",
14:12:26 | !         "HypervVirtualSwitch": "",
14:12:26 | !         "KvmNetwork": "default",
14:12:26 | !         "DockerOpt": null,
14:12:26 | !         "DisableDriverMounts": false,
14:12:26 | !         "NFSShare": [],
14:12:26 | !         "NFSSharesRoot": "/nfsshares",
14:12:26 | !         "UUID": "",
14:12:26 | !         "GPU": false,
14:12:26 | !         "NoVTXCheck": false
14:12:26 | !     },
14:12:26 | !     "KubernetesConfig": {
14:12:26 | !         "KubernetesVersion": "v1.13.4",
14:12:26 | !         "NodeIP": "",
14:12:26 | !         "NodePort": 8443,
14:12:26 | !         "NodeName": "minikube",
14:12:26 | !         "APIServerName": "minikubeCA",
14:12:26 | !         "APIServerNames": null,
14:12:26 | !         "APIServerIPs": null,
14:12:26 | !         "DNSDomain": "cluster.local",
14:12:26 | !         "ContainerRuntime": "docker",
14:12:26 | !         "CRISocket": "",
14:12:26 | !         "NetworkPlugin": "",
14:12:26 | !         "FeatureGates": "",
14:12:26 | !         "ServiceCIDR": "10.96.0.0/12",
14:12:26 | !         "ImageRepository": "",
14:12:26 | !         "ExtraOptions": null,
14:12:26 | !         "ShouldLoadCachedImages": true,
14:12:26 | !         "EnableDefaultCNI": false
14:12:26 | !     }
14:12:26 | ! }
14:12:26 | ! I0322 14:12:26.676672 24482 cache_images.go:292] Attempting to cache image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
14:12:26 | ! I0322 14:12:26.676707 24482 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-proxy-amd64:v1.13.4 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-proxy-amd64_v1.13.4
14:12:26 | ! I0322 14:12:26.676723 24482 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-scheduler-amd64:v1.13.4 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-scheduler-amd64_v1.13.4
14:12:26 | ! I0322 14:12:26.676725 24482 cluster.go:73] Skipping create...Using existing machine configuration
14:12:26 | ! I0322 14:12:26.676737 24482 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-controller-manager-amd64:v1.13.4 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-amd64_v1.13.4
14:12:26 | ! I0322 14:12:26.676753 24482 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-apiserver-amd64:v1.13.4 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-apiserver-amd64_v1.13.4
14:12:26 | ! I0322 14:12:26.676767 24482 cache_images.go:292] Attempting to cache image: k8s.gcr.io/pause-amd64:3.1 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause-amd64_3.1
14:12:26 | ! I0322 14:12:26.676883 24482 cache_images.go:292] Attempting to cache image: k8s.gcr.io/pause:3.1 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause_3.1
14:12:26 | ! I0322 14:12:26.676936 24482 cache_images.go:292] Attempting to cache image: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.8
14:12:26 | ! I0322 14:12:26.676951 24482 cache_images.go:292] Attempting to cache image: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.8
14:12:26 | ! I0322 14:12:26.676974 24482 cache_images.go:292] Attempting to cache image: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.8
14:12:26 | ! I0322 14:12:26.676992 24482 cache_images.go:292] Attempting to cache image: k8s.gcr.io/etcd-amd64:3.2.24 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.2.24
14:12:26 | ! I0322 14:12:26.677006 24482 cache_images.go:292] Attempting to cache image: k8s.gcr.io/coredns:1.2.6 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/coredns_1.2.6
14:12:26 | ! I0322 14:12:26.677020 24482 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1
14:12:26 | ! I0322 14:12:26.677035 24482 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-addon-manager:v8.6 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v8.6
14:12:26 | ! I0322 14:12:26.677066 24482 cache_images.go:83] Successfully cached all images.
14:12:26 | > i Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
14:12:26 | ! I0322 14:12:26.677362 24482 none.go:231] checking for running kubelet ...
14:12:26 | ! I0322 14:12:26.677370 24482 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet
14:12:26 | ! I0322 14:12:26.683143 24482 none.go:125] kubelet not running: running command: systemctl is-active --quiet service kubelet: exit status 3
14:12:26 | ! I0322 14:12:26.683213 24482 cluster.go:92] Machine state: Stopped
14:12:26 | > : Restarting existing none VM for "minikube" ...
14:12:26 | ! I0322 14:12:26.684138 24482 cluster.go:110] engine options: &{ArbitraryFlags:[] DNS:[] GraphDir: Env:[] Ipv6:false InsecureRegistry:[10.96.0.0/12] Labels:[] LogLevel: StorageDriver: SelinuxEnabled:false TLSVerify:false RegistryMirror:[] InstallURL:}
14:12:26 | > : Waiting for SSH access ...
14:12:26 | > - "minikube" IP address is 10.128.0.3
14:12:26 | ! I0322 14:12:26.684669 24482 start.go:605] Saving config:
14:12:26 | ! {
14:12:26 | !     "MachineConfig": {
14:12:26 | !         "MinikubeISO": "https://storage.googleapis.com/minikube/iso/minikube-v0.35.0.iso",
14:12:26 | !         "Memory": 2048,
14:12:26 | !         "CPUs": 2,
14:12:26 | !         "DiskSize": 20000,
14:12:26 | !         "VMDriver": "none",
14:12:26 | !         "ContainerRuntime": "docker",
14:12:26 | !         "HyperkitVpnKitSock": "",
14:12:26 | !         "HyperkitVSockPorts": [],
14:12:26 | !         "XhyveDiskDriver": "ahci-hd",
14:12:26 | !         "DockerEnv": null,
14:12:26 | !         "InsecureRegistry": null,
14:12:26 | !         "RegistryMirror": null,
14:12:26 | !         "HostOnlyCIDR": "192.168.99.1/24",
14:12:26 | !         "HypervVirtualSwitch": "",
14:12:26 | !         "KvmNetwork": "default",
14:12:26 | !         "DockerOpt": null,
14:12:26 | !         "DisableDriverMounts": false,
14:12:26 | !         "NFSShare": [],
14:12:26 | !         "NFSSharesRoot": "/nfsshares",
14:12:26 | !         "UUID": "",
14:12:26 | !         "GPU": false,
14:12:26 | !         "NoVTXCheck": false
14:12:26 | !     },
14:12:26 | !     "KubernetesConfig": {
14:12:26 | !         "KubernetesVersion": "v1.13.4",
14:12:26 | !         "NodeIP": "10.128.0.3",
14:12:26 | !         "NodePort": 8443,
14:12:26 | !         "NodeName": "minikube",
14:12:26 | !         "APIServerName": "minikubeCA",
14:12:26 | !         "APIServerNames": null,
14:12:26 | !         "APIServerIPs": null,
14:12:26 | !         "DNSDomain": "cluster.local",
14:12:26 | !         "ContainerRuntime": "docker",
14:12:26 | !         "CRISocket": "",
14:12:26 | !         "NetworkPlugin": "",
14:12:26 | !         "FeatureGates": "",
14:12:26 | !         "ServiceCIDR": "10.96.0.0/12",
14:12:26 | !         "ImageRepository": "",
14:12:26 | !         "ExtraOptions": null,
14:12:26 | !         "ShouldLoadCachedImages": true,
14:12:26 | !         "EnableDefaultCNI": false
14:12:26 | !     }
14:12:26 | ! }
14:12:26 | > - Configuring Docker as the container runtime ...
14:12:26 | ! I0322 14:12:26.685006 24482 exec_runner.go:39] Run: systemctl is-active --quiet service containerd
14:12:26 | ! I0322 14:12:26.690104 24482 exec_runner.go:39] Run: systemctl is-active --quiet service crio
14:12:26 | ! I0322 14:12:26.694813 24482 exec_runner.go:39] Run: systemctl is-active --quiet service rkt-api
14:12:26 | ! I0322 14:12:26.699493 24482 exec_runner.go:39] Run: sudo systemctl restart docker
14:12:28 | ! I0322 14:12:28.626587 24482 exec_runner.go:50] Run with output: docker version --format '{{.Server.Version}}'
14:12:28 | > - Version of container runtime is 18.06.1-ce
14:12:28 | > - Preparing Kubernetes environment ...
14:12:28 | ! I0322 14:12:28.696620 24482 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
14:12:28 | ! I0322 14:12:28.696639 24482 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.8
14:12:28 | ! I0322 14:12:28.696726 24482 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-amd64_v1.13.4
14:12:28 | ! I0322 14:12:28.698834 24482 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-apiserver-amd64_v1.13.4
14:12:28 | ! I0322 14:12:28.703328 24482 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-proxy-amd64_v1.13.4
14:12:28 | ! I0322 14:12:28.704109 24482 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-scheduler-amd64_v1.13.4
14:12:28 | ! I0322 14:12:28.720315 24482 docker.go:89] Loading image: /tmp/k8s-dns-kube-dns-amd64_1.14.8
14:12:28 | ! I0322 14:12:28.720569 24482 exec_runner.go:39] Run: docker load -i /tmp/k8s-dns-kube-dns-amd64_1.14.8
14:12:28 | ! I0322 14:12:28.724333 24482 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.8
14:12:28 | ! I0322 14:12:28.737709 24482 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.8
14:12:28 | ! I0322 14:12:28.746318 24482 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.2.24
14:12:28 | ! I0322 14:12:28.753650 24482 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause-amd64_3.1
14:12:28 | ! I0322 14:12:28.753766 24482 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/coredns_1.2.6
14:12:28 | ! I0322 14:12:28.754079 24482 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause_3.1
14:12:28 | ! I0322 14:12:28.754233 24482 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1
14:12:28 | ! I0322 14:12:28.765803 24482 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v8.6
14:12:28 | ! I0322 14:12:28.966538 24482 exec_runner.go:39] Run: sudo rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.8
14:12:28 | ! I0322 14:12:28.978939 24482 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.8 from cache
14:12:28 | ! I0322 14:12:28.978996 24482 docker.go:89] Loading image: /tmp/storage-provisioner_v1.8.1
14:12:28 | ! I0322 14:12:28.979007 24482 exec_runner.go:39] Run: docker load -i /tmp/storage-provisioner_v1.8.1
14:12:29 | ! I0322 14:12:29.161039 24482 exec_runner.go:39] Run: sudo rm -rf /tmp/storage-provisioner_v1.8.1
14:12:29 | ! I0322 14:12:29.173570 24482 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 from cache
14:12:29 | ! I0322 14:12:29.173641 24482 docker.go:89] Loading image: /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
14:12:29 | ! I0322 14:12:29.173650 24482 exec_runner.go:39] Run: docker load -i /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
14:12:29 | ! I0322 14:12:29.341431 24482 exec_runner.go:39] Run: sudo rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
14:12:29 | ! I0322 14:12:29.351090 24482 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.8 from cache
14:12:29 | ! I0322 14:12:29.351152 24482 docker.go:89] Loading image: /tmp/pause-amd64_3.1
14:12:29 | !
I0322 14:12:29.351172 24482 exec_runner.go:39] Run: docker load -i /tmp/pause-amd64_3.1 14:12:29 | ! I0322 14:12:29.508550 24482 exec_runner.go:39] Run: sudo rm -rf /tmp/pause-amd64_3.1 14:12:29 | ! I0322 14:12:29.517347 24482 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause-amd64_3.1 from cache 14:12:29 | ! I0322 14:12:29.517404 24482 docker.go:89] Loading image: /tmp/k8s-dns-sidecar-amd64_1.14.8 14:12:29 | ! I0322 14:12:29.517413 24482 exec_runner.go:39] Run: docker load -i /tmp/k8s-dns-sidecar-amd64_1.14.8 14:12:29 | ! I0322 14:12:29.688593 24482 exec_runner.go:39] Run: sudo rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.8 14:12:29 | ! I0322 14:12:29.698379 24482 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.8 from cache 14:12:29 | ! I0322 14:12:29.698433 24482 docker.go:89] Loading image: /tmp/pause_3.1 14:12:29 | ! I0322 14:12:29.698441 24482 exec_runner.go:39] Run: docker load -i /tmp/pause_3.1 14:12:29 | ! I0322 14:12:29.861071 24482 exec_runner.go:39] Run: sudo rm -rf /tmp/pause_3.1 14:12:29 | ! I0322 14:12:29.868864 24482 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause_3.1 from cache 14:12:29 | ! I0322 14:12:29.868930 24482 docker.go:89] Loading image: /tmp/kube-apiserver-amd64_v1.13.4 14:12:29 | ! I0322 14:12:29.868940 24482 exec_runner.go:39] Run: docker load -i /tmp/kube-apiserver-amd64_v1.13.4 14:12:30 | ! I0322 14:12:30.071242 24482 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-apiserver-amd64_v1.13.4 14:12:30 | ! 
I0322 14:12:30.089228 24482 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-apiserver-amd64_v1.13.4 from cache 14:12:30 | ! I0322 14:12:30.089292 24482 docker.go:89] Loading image: /tmp/kube-addon-manager_v8.6 14:12:30 | ! I0322 14:12:30.089300 24482 exec_runner.go:39] Run: docker load -i /tmp/kube-addon-manager_v8.6 14:12:30 | ! I0322 14:12:30.268925 24482 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-addon-manager_v8.6 14:12:30 | ! I0322 14:12:30.281004 24482 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v8.6 from cache 14:12:30 | ! I0322 14:12:30.281068 24482 docker.go:89] Loading image: /tmp/coredns_1.2.6 14:12:30 | ! I0322 14:12:30.281076 24482 exec_runner.go:39] Run: docker load -i /tmp/coredns_1.2.6 14:12:30 | ! I0322 14:12:30.447235 24482 exec_runner.go:39] Run: sudo rm -rf /tmp/coredns_1.2.6 14:12:30 | ! I0322 14:12:30.456997 24482 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/coredns_1.2.6 from cache 14:12:30 | ! I0322 14:12:30.457048 24482 docker.go:89] Loading image: /tmp/kube-scheduler-amd64_v1.13.4 14:12:30 | ! I0322 14:12:30.457055 24482 exec_runner.go:39] Run: docker load -i /tmp/kube-scheduler-amd64_v1.13.4 14:12:30 | ! I0322 14:12:30.659239 24482 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-scheduler-amd64_v1.13.4 14:12:30 | ! I0322 14:12:30.673045 24482 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-scheduler-amd64_v1.13.4 from cache 14:12:30 | ! 
I0322 14:12:30.673103 24482 docker.go:89] Loading image: /tmp/kube-controller-manager-amd64_v1.13.4 14:12:30 | ! I0322 14:12:30.673111 24482 exec_runner.go:39] Run: docker load -i /tmp/kube-controller-manager-amd64_v1.13.4 14:12:30 | ! I0322 14:12:30.882470 24482 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-controller-manager-amd64_v1.13.4 14:12:30 | ! I0322 14:12:30.898037 24482 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-amd64_v1.13.4 from cache 14:12:30 | ! I0322 14:12:30.898086 24482 docker.go:89] Loading image: /tmp/kube-proxy-amd64_v1.13.4 14:12:30 | ! I0322 14:12:30.898097 24482 exec_runner.go:39] Run: docker load -i /tmp/kube-proxy-amd64_v1.13.4 14:12:31 | ! I0322 14:12:31.094465 24482 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-proxy-amd64_v1.13.4 14:12:31 | ! I0322 14:12:31.108379 24482 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-proxy-amd64_v1.13.4 from cache 14:12:31 | ! I0322 14:12:31.108457 24482 docker.go:89] Loading image: /tmp/kubernetes-dashboard-amd64_v1.10.1 14:12:31 | ! I0322 14:12:31.108467 24482 exec_runner.go:39] Run: docker load -i /tmp/kubernetes-dashboard-amd64_v1.10.1 14:12:31 | ! I0322 14:12:31.333745 24482 exec_runner.go:39] Run: sudo rm -rf /tmp/kubernetes-dashboard-amd64_v1.10.1 14:12:31 | ! I0322 14:12:31.349400 24482 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1 from cache 14:12:31 | ! I0322 14:12:31.349454 24482 docker.go:89] Loading image: /tmp/etcd-amd64_3.2.24 14:12:31 | ! 
I0322 14:12:31.349465 24482 exec_runner.go:39] Run: docker load -i /tmp/etcd-amd64_3.2.24 14:12:31 | ! I0322 14:12:31.581014 24482 exec_runner.go:39] Run: sudo rm -rf /tmp/etcd-amd64_3.2.24 14:12:31 | ! I0322 14:12:31.599411 24482 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.2.24 from cache 14:12:31 | ! I0322 14:12:31.599469 24482 cache_images.go:109] Successfully loaded all cached images. 14:12:31 | ! I0322 14:12:31.599685 24482 kubeadm.go:452] kubelet v1.13.4 config: 14:12:31 | ! [Unit] 14:12:31 | ! Wants=docker.socket 14:12:31 | ! [Service] 14:12:31 | ! ExecStart= 14:12:31 | ! ExecStart=/usr/bin/kubelet --allow-privileged=true --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests 14:12:31 | ! [Install] 14:12:31 | ! I0322 14:12:31.732407 24482 exec_runner.go:39] Run: 14:12:31 | ! sudo systemctl daemon-reload && 14:12:31 | ! sudo systemctl enable kubelet && 14:12:31 | ! sudo systemctl start kubelet 14:12:31 | ! I0322 14:12:31.866192 24482 certs.go:46] Setting up certificates for IP: 10.128.0.3 14:12:31 | > : Waiting for image downloads to complete ... 14:12:31 | ! I0322 14:12:31.877587 24482 kubeconfig.go:127] Using kubeconfig: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/kubeconfig 14:12:31 | ! I0322 14:12:31.878725 24482 exec_runner.go:39] Run: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml 14:12:31 | > - Pulling images required by Kubernetes v1.13.4 ... 
14:12:33 | > : Relaunching Kubernetes v1.13.4 using kubeadm ... 14:12:33 | ! I0322 14:12:33.419460 24482 exec_runner.go:39] Run: sudo kubeadm init phase certs all --config /var/lib/kubeadm.yaml 14:12:33 | ! I0322 14:12:33.695283 24482 exec_runner.go:39] Run: sudo kubeadm init phase kubeconfig all --config /var/lib/kubeadm.yaml 14:12:35 | ! I0322 14:12:35.567243 24482 exec_runner.go:39] Run: sudo kubeadm init phase control-plane all --config /var/lib/kubeadm.yaml 14:12:35 | ! I0322 14:12:35.627268 24482 exec_runner.go:39] Run: sudo kubeadm init phase etcd local --config /var/lib/kubeadm.yaml 14:12:35 | ! I0322 14:12:35.682797 24482 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-apiserver" ... 14:12:41 | ! I0322 14:12:41.492973 24482 kubernetes.go:134] Found 1 Pods for label selector component=kube-apiserver 14:12:41 | ! I0322 14:12:41.493902 24482 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-proxy" ... 14:12:41 | ! I0322 14:12:41.500115 24482 kubernetes.go:134] Found 1 Pods for label selector k8s-app=kube-proxy 14:12:41 | ! I0322 14:12:41.500190 24482 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=etcd" ... 14:12:41 | ! I0322 14:12:41.504614 24482 kubernetes.go:134] Found 1 Pods for label selector component=etcd 14:12:41 | ! I0322 14:12:41.504692 24482 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-scheduler" ... 14:12:41 | ! I0322 14:12:41.511099 24482 kubernetes.go:134] Found 1 Pods for label selector component=kube-scheduler 14:12:41 | ! I0322 14:12:41.511169 24482 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-controller-manager" ... 14:12:41 | ! I0322 14:12:41.515792 24482 kubernetes.go:134] Found 1 Pods for label selector component=kube-controller-manager 14:12:41 | ! 
I0322 14:12:41.515839 24482 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-addon-manager" ... 14:12:41 | ! I0322 14:12:41.525110 24482 kubernetes.go:134] Found 1 Pods for label selector component=kube-addon-manager 14:12:41 | ! I0322 14:12:41.525181 24482 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-dns" ... 14:12:41 | ! I0322 14:12:41.530611 24482 kubernetes.go:134] Found 2 Pods for label selector k8s-app=kube-dns 14:12:41 | > : Waiting for pods: apiserver proxy etcd scheduler controller addon-manager dns 14:12:41 | > : Updating kube-proxy configuration ... 14:12:41 | ! I0322 14:12:41.533069 24482 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-proxy" ... 14:12:41 | ! I0322 14:12:41.536852 24482 kubernetes.go:134] Found 1 Pods for label selector k8s-app=kube-proxy 14:12:41 | ! I0322 14:12:41.541079 24482 util.go:174] kube-proxy config: apiVersion: v1 14:12:41 | ! kind: Config 14:12:41 | ! clusters: 14:12:41 | ! - cluster: 14:12:41 | ! certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt 14:12:41 | ! server: https://localhost:8443 14:12:41 | ! name: default 14:12:41 | ! contexts: 14:12:41 | ! - context: 14:12:41 | ! cluster: default 14:12:41 | ! namespace: default 14:12:41 | ! user: default 14:12:41 | ! name: default 14:12:41 | ! current-context: default 14:12:41 | ! users: 14:12:41 | ! - name: default 14:12:41 | ! user: 14:12:41 | ! tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token 14:12:41 | ! I0322 14:12:41.541233 24482 util.go:194] updated kube-proxy config: apiVersion: v1 14:12:41 | ! kind: Config 14:12:41 | ! clusters: 14:12:41 | ! - cluster: 14:12:41 | ! certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt 14:12:41 | ! server: https://10.128.0.3:8443 14:12:41 | ! name: default 14:12:41 | ! contexts: 14:12:41 | ! - context: 14:12:41 | ! cluster: default 14:12:41 | ! namespace: default 14:12:41 | ! 
user: default 14:12:41 | ! name: default 14:12:41 | ! current-context: default 14:12:41 | ! users: 14:12:41 | ! - name: default 14:12:41 | ! user: 14:12:41 | ! tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token 14:12:41 | ! I0322 14:12:41.555997 24482 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-proxy" ... 14:12:41 | ! I0322 14:12:41.565864 24482 kubernetes.go:134] Found 1 Pods for label selector k8s-app=kube-proxy 14:12:41 | ! I0322 14:12:41.568133 24482 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-apiserver" ... 14:12:41 | ! I0322 14:12:41.571255 24482 kubernetes.go:134] Found 1 Pods for label selector component=kube-apiserver 14:12:41 | ! I0322 14:12:41.571289 24482 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-proxy" ... 14:12:41 | ! I0322 14:12:41.578674 24482 kubernetes.go:134] Found 1 Pods for label selector k8s-app=kube-proxy 14:12:41 | ! I0322 14:12:41.578714 24482 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=etcd" ... 14:12:41 | ! I0322 14:12:41.581819 24482 kubernetes.go:134] Found 1 Pods for label selector component=etcd 14:12:41 | ! I0322 14:12:41.581849 24482 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-scheduler" ... 14:12:41 | ! I0322 14:12:41.584608 24482 kubernetes.go:134] Found 1 Pods for label selector component=kube-scheduler 14:12:41 | ! I0322 14:12:41.584638 24482 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-controller-manager" ... 14:12:41 | ! I0322 14:12:41.587940 24482 kubernetes.go:134] Found 1 Pods for label selector component=kube-controller-manager 14:12:41 | ! I0322 14:12:41.587977 24482 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-addon-manager" ... 14:12:41 | ! 
I0322 14:12:41.590841 24482 kubernetes.go:134] Found 1 Pods for label selector component=kube-addon-manager 14:12:41 | ! I0322 14:12:41.590871 24482 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-dns" ... 14:12:41 | ! I0322 14:12:41.625731 24482 kubernetes.go:134] Found 2 Pods for label selector k8s-app=kube-dns 14:12:41 | ! I0322 14:12:41.626088 24482 exec_runner.go:50] Run with output: sudo systemctl is-active kubelet 14:12:41 | ! I0322 14:12:41.652223 24482 kubeadm.go:134] https://10.128.0.3:8443/healthz response: &{Status:500 Internal Server Error StatusCode:500 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff] Date:[Fri, 22 Mar 2019 14:12:41 GMT] Content-Length:[816]] Body:0xc0006f7940 ContentLength:816 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0006e8200 TLS:0xc0004e62c0} 14:12:41 | ! I0322 14:12:41.652326 24482 utils.go:125] error: Temporary Error: apiserver status=Error err= - sleeping 10s 14:12:51 | ! I0322 14:12:51.652530 24482 utils.go:114] retry loop 1 14:12:51 | ! I0322 14:12:51.658301 24482 kubeadm.go:134] https://10.128.0.3:8443/healthz response: &{Status:200 OK StatusCode:200 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Date:[Fri, 22 Mar 2019 14:12:51 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] Body:0xc00048f600 ContentLength:2 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000537000 TLS:0xc00003a790} 14:12:51 | > - Verifying component health ...... 14:12:51 | > > Configuring local host environment ... 14:12:51 | ! ! The 'none' driver provides limited isolation and may reduce system security and reliability. 14:12:51 | ! ! For more information, see: 14:12:51 | > - https://github.com/kubernetes/minikube/blob/master/docs/vmdriver-none.md 14:12:51 | ! ! 
kubectl and minikube configuration will be stored in /home/jenkins 14:12:51 | > - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME 14:12:51 | ! ! To use kubectl or minikube commands as your own user, you may 14:12:51 | > - sudo chown -R $USER $HOME/.kube $HOME/.minikube 14:12:51 | ! ! need to relocate them. For example, to overwrite your own settings: 14:12:51 | > i This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true 14:12:51 | > + kubectl is now configured to use "minikube" 14:12:51 | > = Done! Thank you for using minikube! 14:12:51 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 status --format={{.Host}} --v=10 --logtostderr --bootstrapper=kubeadm] 14:12:51 | ! I0322 14:12:51.690580 26406 notify.go:126] Checking for updates... 14:12:51 | ! I0322 14:12:51.756804 26406 none.go:231] checking for running kubelet ... 14:12:51 | ! I0322 14:12:51.756824 26406 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet 14:12:51 | ! I0322 14:12:51.762958 26406 exec_runner.go:50] Run with output: sudo systemctl is-active kubelet 14:12:51 | ! I0322 14:12:51.773501 26406 interface.go:360] Looking for default routes with IPv4 addresses 14:12:51 | ! I0322 14:12:51.773518 26406 interface.go:365] Default route transits interface "eth0" 14:12:51 | ! I0322 14:12:51.773744 26406 interface.go:174] Interface eth0 is up 14:12:51 | ! I0322 14:12:51.773802 26406 interface.go:222] Interface "eth0" has 1 addresses :[10.128.0.3/32]. 14:12:51 | ! I0322 14:12:51.773823 26406 interface.go:189] Checking addr 10.128.0.3/32. 14:12:51 | ! I0322 14:12:51.773832 26406 interface.go:196] IP found 10.128.0.3 14:12:51 | ! I0322 14:12:51.773840 26406 interface.go:228] Found valid IPv4 address 10.128.0.3 for interface "eth0". 14:12:51 | ! I0322 14:12:51.773846 26406 interface.go:371] Found active IP 10.128.0.3 14:12:51 | ! 
I0322 14:12:51.779690 26406 kubeadm.go:134] https://10.128.0.3:8443/healthz response: &{Status:200 OK StatusCode:200 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Date:[Fri, 22 Mar 2019 14:12:51 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] Body:0xc0003bc380 ContentLength:2 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0000f1900 TLS:0xc00044f080} 14:12:51 | > Running 14:12:51 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 delete] 14:12:51 | > # Uninstalling Kubernetes v1.13.4 using kubeadm ... 14:12:57 | > x Deleting "minikube" from none ... 14:12:57 | > - The "minikube" cluster has been deleted. 14:12:57 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 status --format={{.Host}} --v=10 --logtostderr --bootstrapper=kubeadm] 14:12:57 | ! I0322 14:12:57.841973 27623 notify.go:126] Checking for updates... === RUN TestStartStop/docker+cache+ignore_verifications 14:12:57 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 config set WantReportErrorPrompt false] 14:12:58 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 delete] 14:12:58 | > ? "minikube" cluster does not exist 14:12:58 | > ? "minikube" profile does not exist 14:12:58 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 status --format={{.Host}} --v=10 --logtostderr --bootstrapper=kubeadm] 14:12:58 | ! I0322 14:12:58.145623 27658 notify.go:126] Checking for updates... 14:12:58 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 start --vm-driver=none --v=10 --logtostderr --bootstrapper=kubeadm --container-runtime=docker --cache-images --extra-config kubeadm.ignore-preflight-errors=SystemVerification --alsologtostderr --v=2] 14:12:58 | ! I0322 14:12:58.248976 27669 notify.go:126] Checking for updates... 
14:12:58 | > o minikube v0.35.0 on linux (amd64) 14:12:58 | ! I0322 14:12:58.323427 27669 start.go:605] Saving config: 14:12:58 | ! { 14:12:58 | ! "MachineConfig": { 14:12:58 | ! "MinikubeISO": "https://storage.googleapis.com/minikube/iso/minikube-v0.35.0.iso", 14:12:58 | ! "Memory": 2048, 14:12:58 | ! "CPUs": 2, 14:12:58 | ! "DiskSize": 20000, 14:12:58 | ! "VMDriver": "none", 14:12:58 | ! "ContainerRuntime": "docker", 14:12:58 | ! "HyperkitVpnKitSock": "", 14:12:58 | ! "HyperkitVSockPorts": [], 14:12:58 | ! "XhyveDiskDriver": "ahci-hd", 14:12:58 | ! "DockerEnv": null, 14:12:58 | ! "InsecureRegistry": null, 14:12:58 | ! "RegistryMirror": null, 14:12:58 | ! "HostOnlyCIDR": "192.168.99.1/24", 14:12:58 | ! "HypervVirtualSwitch": "", 14:12:58 | ! "KvmNetwork": "default", 14:12:58 | ! "DockerOpt": null, 14:12:58 | ! "DisableDriverMounts": false, 14:12:58 | ! "NFSShare": [], 14:12:58 | ! "NFSSharesRoot": "/nfsshares", 14:12:58 | ! "UUID": "", 14:12:58 | ! "GPU": false, 14:12:58 | ! "NoVTXCheck": false 14:12:58 | ! }, 14:12:58 | ! "KubernetesConfig": { 14:12:58 | ! "KubernetesVersion": "v1.13.4", 14:12:58 | ! "NodeIP": "", 14:12:58 | ! "NodePort": 8443, 14:12:58 | ! "NodeName": "minikube", 14:12:58 | ! "APIServerName": "minikubeCA", 14:12:58 | ! "APIServerNames": null, 14:12:58 | ! "APIServerIPs": null, 14:12:58 | ! "DNSDomain": "cluster.local", 14:12:58 | ! "ContainerRuntime": "docker", 14:12:58 | ! "CRISocket": "", 14:12:58 | ! "NetworkPlugin": "", 14:12:58 | ! "FeatureGates": "", 14:12:58 | ! "ServiceCIDR": "10.96.0.0/12", 14:12:58 | ! "ImageRepository": "", 14:12:58 | ! "ExtraOptions": [ 14:12:58 | ! { 14:12:58 | ! "Component": "kubeadm", 14:12:58 | ! "Key": "ignore-preflight-errors", 14:12:58 | ! "Value": "SystemVerification" 14:12:58 | ! } 14:12:58 | ! ], 14:12:58 | ! "ShouldLoadCachedImages": true, 14:12:58 | ! "EnableDefaultCNI": false 14:12:58 | ! } 14:12:58 | ! } 14:12:58 | ! 
I0322 14:12:58.323530 27669 cache_images.go:292] Attempting to cache image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 14:12:58 | ! I0322 14:12:58.323571 27669 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-proxy-amd64:v1.13.4 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-proxy-amd64_v1.13.4 14:12:58 | > $ Downloading Kubernetes v1.13.4 images in the background ... 14:12:58 | > > Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ... 14:12:58 | > - "minikube" IP address is 10.128.0.3 14:12:58 | > - Configuring Docker as the container runtime ... 14:12:58 | ! I0322 14:12:58.323582 27669 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-scheduler-amd64:v1.13.4 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-scheduler-amd64_v1.13.4 14:12:58 | ! I0322 14:12:58.323593 27669 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-controller-manager-amd64:v1.13.4 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-amd64_v1.13.4 14:12:58 | ! I0322 14:12:58.323594 27669 cluster.go:68] Machine does not exist... provisioning new machine 14:12:58 | ! I0322 14:12:58.323603 27669 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-apiserver-amd64:v1.13.4 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-apiserver-amd64_v1.13.4 14:12:58 | ! 
I0322 14:12:58.323637 27669 cache_images.go:292] Attempting to cache image: k8s.gcr.io/pause-amd64:3.1 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause-amd64_3.1 14:12:58 | ! I0322 14:12:58.323619 27669 cluster.go:69] Provisioning machine with config: {MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v0.35.0.iso Memory:2048 CPUs:2 DiskSize:20000 VMDriver:none ContainerRuntime:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] XhyveDiskDriver:ahci-hd DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: KvmNetwork:default Downloader:{} DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: GPU:false NoVTXCheck:false} 14:12:58 | ! I0322 14:12:58.323648 27669 cache_images.go:292] Attempting to cache image: k8s.gcr.io/pause:3.1 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause_3.1 14:12:58 | ! I0322 14:12:58.323660 27669 cache_images.go:292] Attempting to cache image: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.8 14:12:58 | ! I0322 14:12:58.323669 27669 cache_images.go:292] Attempting to cache image: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.8 14:12:58 | ! I0322 14:12:58.323679 27669 cache_images.go:292] Attempting to cache image: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.8 14:12:58 | ! 
I0322 14:12:58.323688 27669 cache_images.go:292] Attempting to cache image: k8s.gcr.io/etcd-amd64:3.2.24 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.2.24 14:12:58 | ! I0322 14:12:58.323697 27669 cache_images.go:292] Attempting to cache image: k8s.gcr.io/coredns:1.2.6 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/coredns_1.2.6 14:12:58 | ! I0322 14:12:58.323706 27669 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1 14:12:58 | ! I0322 14:12:58.323714 27669 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-addon-manager:v8.6 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v8.6 14:12:58 | ! I0322 14:12:58.323722 27669 cache_images.go:83] Successfully cached all images. 14:12:58 | ! I0322 14:12:58.324997 27669 start.go:605] Saving config: 14:12:58 | ! { 14:12:58 | ! "MachineConfig": { 14:12:58 | ! "MinikubeISO": "https://storage.googleapis.com/minikube/iso/minikube-v0.35.0.iso", 14:12:58 | ! "Memory": 2048, 14:12:58 | ! "CPUs": 2, 14:12:58 | ! "DiskSize": 20000, 14:12:58 | ! "VMDriver": "none", 14:12:58 | ! "ContainerRuntime": "docker", 14:12:58 | ! "HyperkitVpnKitSock": "", 14:12:58 | ! "HyperkitVSockPorts": [], 14:12:58 | ! "XhyveDiskDriver": "ahci-hd", 14:12:58 | ! "DockerEnv": null, 14:12:58 | ! "InsecureRegistry": null, 14:12:58 | ! "RegistryMirror": null, 14:12:58 | ! "HostOnlyCIDR": "192.168.99.1/24", 14:12:58 | ! "HypervVirtualSwitch": "", 14:12:58 | ! "KvmNetwork": "default", 14:12:58 | ! "DockerOpt": null, 14:12:58 | ! 
"DisableDriverMounts": false, 14:12:58 | ! "NFSShare": [], 14:12:58 | ! "NFSSharesRoot": "/nfsshares", 14:12:58 | ! "UUID": "", 14:12:58 | ! "GPU": false, 14:12:58 | ! "NoVTXCheck": false 14:12:58 | ! }, 14:12:58 | ! "KubernetesConfig": { 14:12:58 | ! "KubernetesVersion": "v1.13.4", 14:12:58 | ! "NodeIP": "10.128.0.3", 14:12:58 | ! "NodePort": 8443, 14:12:58 | ! "NodeName": "minikube", 14:12:58 | ! "APIServerName": "minikubeCA", 14:12:58 | ! "APIServerNames": null, 14:12:58 | ! "APIServerIPs": null, 14:12:58 | ! "DNSDomain": "cluster.local", 14:12:58 | ! "ContainerRuntime": "docker", 14:12:58 | ! "CRISocket": "", 14:12:58 | ! "NetworkPlugin": "", 14:12:58 | ! "FeatureGates": "", 14:12:58 | ! "ServiceCIDR": "10.96.0.0/12", 14:12:58 | ! "ImageRepository": "", 14:12:58 | ! "ExtraOptions": [ 14:12:58 | ! { 14:12:58 | ! "Component": "kubeadm", 14:12:58 | ! "Key": "ignore-preflight-errors", 14:12:58 | ! "Value": "SystemVerification" 14:12:58 | ! } 14:12:58 | ! ], 14:12:58 | ! "ShouldLoadCachedImages": true, 14:12:58 | ! "EnableDefaultCNI": false 14:12:58 | ! } 14:12:58 | ! } 14:12:58 | ! I0322 14:12:58.325170 27669 exec_runner.go:39] Run: systemctl is-active --quiet service containerd 14:12:58 | ! I0322 14:12:58.332963 27669 exec_runner.go:39] Run: systemctl is-active --quiet service crio 14:12:58 | ! I0322 14:12:58.338782 27669 exec_runner.go:39] Run: systemctl is-active --quiet service rkt-api 14:12:58 | ! I0322 14:12:58.344242 27669 exec_runner.go:39] Run: sudo systemctl restart docker 14:13:00 | ! I0322 14:13:00.061346 27669 exec_runner.go:50] Run with output: docker version --format '{{.Server.Version}}' 14:13:00 | > - Version of container runtime is 18.06.1-ce 14:13:00 | > - Preparing Kubernetes environment ... 14:13:00 | > - kubeadm.ignore-preflight-errors=SystemVerification 14:13:00 | ! 
I0322 14:13:00.131548 27669 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 14:13:00 | ! I0322 14:13:00.131572 27669 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-proxy-amd64_v1.13.4 14:13:00 | ! I0322 14:13:00.131594 27669 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.8 14:13:00 | ! I0322 14:13:00.136726 27669 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/coredns_1.2.6 14:13:00 | ! I0322 14:13:00.131548 27669 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause-amd64_3.1 14:13:00 | ! I0322 14:13:00.141345 27669 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1 14:13:00 | ! I0322 14:13:00.144874 27669 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.2.24 14:13:00 | ! 
I0322 14:13:00.141581 27669 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v8.6 14:13:00 | ! I0322 14:13:00.143121 27669 docker.go:89] Loading image: /tmp/pause-amd64_3.1 14:13:00 | ! I0322 14:13:00.148904 27669 exec_runner.go:39] Run: docker load -i /tmp/pause-amd64_3.1 14:13:00 | ! I0322 14:13:00.149081 27669 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause_3.1 14:13:00 | ! I0322 14:13:00.149644 27669 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.8 14:13:00 | ! I0322 14:13:00.156709 27669 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-scheduler-amd64_v1.13.4 14:13:00 | ! I0322 14:13:00.164174 27669 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.8 14:13:00 | ! I0322 14:13:00.164814 27669 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-amd64_v1.13.4 14:13:00 | ! I0322 14:13:00.222980 27669 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-apiserver-amd64_v1.13.4 14:13:00 | ! 
I0322 14:13:00.393039 27669 exec_runner.go:39] Run: sudo rm -rf /tmp/pause-amd64_3.1 14:13:00 | ! I0322 14:13:00.400972 27669 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause-amd64_3.1 from cache 14:13:00 | ! I0322 14:13:00.401027 27669 docker.go:89] Loading image: /tmp/pause_3.1 14:13:00 | ! I0322 14:13:00.401038 27669 exec_runner.go:39] Run: docker load -i /tmp/pause_3.1 14:13:00 | ! I0322 14:13:00.554266 27669 exec_runner.go:39] Run: sudo rm -rf /tmp/pause_3.1 14:13:00 | ! I0322 14:13:00.561325 27669 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause_3.1 from cache 14:13:00 | ! I0322 14:13:00.561394 27669 docker.go:89] Loading image: /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8 14:13:00 | ! I0322 14:13:00.561403 27669 exec_runner.go:39] Run: docker load -i /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8 14:13:00 | ! I0322 14:13:00.730396 27669 exec_runner.go:39] Run: sudo rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8 14:13:00 | ! I0322 14:13:00.739524 27669 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.8 from cache 14:13:00 | ! I0322 14:13:00.739577 27669 docker.go:89] Loading image: /tmp/storage-provisioner_v1.8.1 14:13:00 | ! I0322 14:13:00.739585 27669 exec_runner.go:39] Run: docker load -i /tmp/storage-provisioner_v1.8.1 14:13:00 | ! I0322 14:13:00.924081 27669 exec_runner.go:39] Run: sudo rm -rf /tmp/storage-provisioner_v1.8.1 14:13:00 | ! 
I0322 14:13:00.936996 27669 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 from cache 14:13:00 | ! I0322 14:13:00.937050 27669 docker.go:89] Loading image: /tmp/k8s-dns-sidecar-amd64_1.14.8 14:13:00 | ! I0322 14:13:00.937060 27669 exec_runner.go:39] Run: docker load -i /tmp/k8s-dns-sidecar-amd64_1.14.8 14:13:01 | ! I0322 14:13:01.101702 27669 exec_runner.go:39] Run: sudo rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.8 14:13:01 | ! I0322 14:13:01.111924 27669 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.8 from cache 14:13:01 | ! I0322 14:13:01.111986 27669 docker.go:89] Loading image: /tmp/k8s-dns-kube-dns-amd64_1.14.8 14:13:01 | ! I0322 14:13:01.111995 27669 exec_runner.go:39] Run: docker load -i /tmp/k8s-dns-kube-dns-amd64_1.14.8 14:13:01 | ! I0322 14:13:01.294331 27669 exec_runner.go:39] Run: sudo rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.8 14:13:01 | ! I0322 14:13:01.304760 27669 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.8 from cache 14:13:01 | ! I0322 14:13:01.304811 27669 docker.go:89] Loading image: /tmp/kube-addon-manager_v8.6 14:13:01 | ! I0322 14:13:01.304823 27669 exec_runner.go:39] Run: docker load -i /tmp/kube-addon-manager_v8.6 14:13:01 | ! I0322 14:13:01.504648 27669 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-addon-manager_v8.6 14:13:01 | ! 
I0322 14:13:01.515518 27669 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v8.6 from cache 14:13:01 | ! I0322 14:13:01.515595 27669 docker.go:89] Loading image: /tmp/coredns_1.2.6 14:13:01 | ! I0322 14:13:01.515615 27669 exec_runner.go:39] Run: docker load -i /tmp/coredns_1.2.6 14:13:01 | ! I0322 14:13:01.691246 27669 exec_runner.go:39] Run: sudo rm -rf /tmp/coredns_1.2.6 14:13:01 | ! I0322 14:13:01.702607 27669 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/coredns_1.2.6 from cache 14:13:01 | ! I0322 14:13:01.702655 27669 docker.go:89] Loading image: /tmp/kube-scheduler-amd64_v1.13.4 14:13:01 | ! I0322 14:13:01.702663 27669 exec_runner.go:39] Run: docker load -i /tmp/kube-scheduler-amd64_v1.13.4 14:13:02 | ! I0322 14:13:02.927325 27669 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-scheduler-amd64_v1.13.4 14:13:02 | ! I0322 14:13:02.940758 27669 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-scheduler-amd64_v1.13.4 from cache 14:13:02 | ! I0322 14:13:02.940832 27669 docker.go:89] Loading image: /tmp/kube-controller-manager-amd64_v1.13.4 14:13:02 | ! I0322 14:13:02.940841 27669 exec_runner.go:39] Run: docker load -i /tmp/kube-controller-manager-amd64_v1.13.4 14:13:03 | ! I0322 14:13:03.156586 27669 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-controller-manager-amd64_v1.13.4 14:13:03 | ! 
I0322 14:13:03.172052 27669 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-amd64_v1.13.4 from cache 14:13:03 | ! I0322 14:13:03.172106 27669 docker.go:89] Loading image: /tmp/kubernetes-dashboard-amd64_v1.10.1 14:13:03 | ! I0322 14:13:03.172118 27669 exec_runner.go:39] Run: docker load -i /tmp/kubernetes-dashboard-amd64_v1.10.1 14:13:03 | ! I0322 14:13:03.384934 27669 exec_runner.go:39] Run: sudo rm -rf /tmp/kubernetes-dashboard-amd64_v1.10.1 14:13:03 | ! I0322 14:13:03.401783 27669 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1 from cache 14:13:03 | ! I0322 14:13:03.401841 27669 docker.go:89] Loading image: /tmp/etcd-amd64_3.2.24 14:13:03 | ! I0322 14:13:03.401850 27669 exec_runner.go:39] Run: docker load -i /tmp/etcd-amd64_3.2.24 14:13:03 | ! I0322 14:13:03.632977 27669 exec_runner.go:39] Run: sudo rm -rf /tmp/etcd-amd64_3.2.24 14:13:03 | ! I0322 14:13:03.652606 27669 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.2.24 from cache 14:13:03 | ! I0322 14:13:03.652680 27669 docker.go:89] Loading image: /tmp/kube-proxy-amd64_v1.13.4 14:13:03 | ! I0322 14:13:03.652689 27669 exec_runner.go:39] Run: docker load -i /tmp/kube-proxy-amd64_v1.13.4 14:13:03 | ! I0322 14:13:03.831035 27669 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-proxy-amd64_v1.13.4 14:13:03 | ! 
I0322 14:13:03.844067 27669 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-proxy-amd64_v1.13.4 from cache
14:13:03 | ! I0322 14:13:03.844123 27669 docker.go:89] Loading image: /tmp/kube-apiserver-amd64_v1.13.4
14:13:03 | ! I0322 14:13:03.844130 27669 exec_runner.go:39] Run: docker load -i /tmp/kube-apiserver-amd64_v1.13.4
14:13:04 | ! I0322 14:13:04.054294 27669 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-apiserver-amd64_v1.13.4
14:13:04 | ! I0322 14:13:04.070161 27669 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-apiserver-amd64_v1.13.4 from cache
14:13:04 | ! I0322 14:13:04.070220 27669 cache_images.go:109] Successfully loaded all cached images.
14:13:04 | ! I0322 14:13:04.070488 27669 kubeadm.go:452] kubelet v1.13.4 config:
14:13:04 | ! [Unit]
14:13:04 | ! Wants=docker.socket
14:13:04 | ! [Service]
14:13:04 | ! ExecStart=
14:13:04 | ! ExecStart=/usr/bin/kubelet --allow-privileged=true --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests
14:13:04 | ! [Install]
14:13:04 | ! I0322 14:13:04.221855 27669 exec_runner.go:39] Run:
14:13:04 | ! sudo systemctl daemon-reload &&
14:13:04 | ! sudo systemctl enable kubelet &&
14:13:04 | ! sudo systemctl start kubelet
14:13:04 | ! I0322 14:13:04.359317 27669 certs.go:46] Setting up certificates for IP: 10.128.0.3
14:13:04 | > : Waiting for image downloads to complete ...
14:13:04 | ! I0322 14:13:04.373623 27669 kubeconfig.go:127] Using kubeconfig: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/kubeconfig
14:13:04 | > - Pulling images required by Kubernetes v1.13.4 ...
14:13:04 | ! I0322 14:13:04.374994 27669 exec_runner.go:39] Run: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml
14:13:05 | > - Launching Kubernetes v1.13.4 using kubeadm ...
14:13:05 | ! I0322 14:13:05.831379 27669 exec_runner.go:50] Run with output:
14:13:05 | ! sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=SystemVerification --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI
14:13:38 | ! I0322 14:13:38.816231 27669 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-apiserver" ...
14:13:38 | ! I0322 14:13:38.833628 27669 kubernetes.go:134] Found 0 Pods for label selector component=kube-apiserver
14:14:27 | ! I0322 14:14:27.337616 27669 kubernetes.go:134] Found 1 Pods for label selector component=kube-apiserver
14:14:37 | ! I0322 14:14:37.338370 27669 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-proxy" ...
14:14:37 | ! I0322 14:14:37.341866 27669 kubernetes.go:134] Found 1 Pods for label selector k8s-app=kube-proxy
14:14:37 | ! I0322 14:14:37.341935 27669 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=etcd" ...
14:14:37 | !
I0322 14:14:37.344349 27669 kubernetes.go:134] Found 0 Pods for label selector component=etcd 14:14:43 | ! I0322 14:14:43.348039 27669 kubernetes.go:134] Found 1 Pods for label selector component=etcd 14:14:47 | ! I0322 14:14:47.348863 27669 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-scheduler" ... 14:14:47 | ! I0322 14:14:47.352367 27669 kubernetes.go:134] Found 1 Pods for label selector component=kube-scheduler 14:14:47 | ! I0322 14:14:47.352430 27669 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-controller-manager" ... 14:14:47 | ! I0322 14:14:47.355071 27669 kubernetes.go:134] Found 0 Pods for label selector component=kube-controller-manager 14:14:48 | ! I0322 14:14:48.359908 27669 kubernetes.go:134] Found 1 Pods for label selector component=kube-controller-manager 14:14:57 | ! I0322 14:14:57.359479 27669 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-addon-manager" ... 14:14:57 | ! I0322 14:14:57.362899 27669 kubernetes.go:134] Found 1 Pods for label selector component=kube-addon-manager 14:14:57 | ! I0322 14:14:57.362978 27669 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-dns" ... 14:14:57 | > : Waiting for pods: apiserver proxy etcd scheduler controller addon-manager dns 14:14:57 | > - Configuring cluster permissions ... 14:14:57 | ! I0322 14:14:57.366201 27669 kubernetes.go:134] Found 2 Pods for label selector k8s-app=kube-dns 14:14:57 | ! I0322 14:14:57.378763 27669 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-apiserver" ... 14:14:57 | ! I0322 14:14:57.381859 27669 kubernetes.go:134] Found 1 Pods for label selector component=kube-apiserver 14:14:57 | ! I0322 14:14:57.381894 27669 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-proxy" ... 14:14:57 | ! 
I0322 14:14:57.384494 27669 kubernetes.go:134] Found 1 Pods for label selector k8s-app=kube-proxy 14:14:57 | ! I0322 14:14:57.384528 27669 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=etcd" ... 14:14:57 | ! I0322 14:14:57.387668 27669 kubernetes.go:134] Found 1 Pods for label selector component=etcd 14:14:57 | ! I0322 14:14:57.387714 27669 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-scheduler" ... 14:14:57 | ! I0322 14:14:57.390724 27669 kubernetes.go:134] Found 1 Pods for label selector component=kube-scheduler 14:14:57 | ! I0322 14:14:57.390759 27669 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-controller-manager" ... 14:14:57 | ! I0322 14:14:57.393646 27669 kubernetes.go:134] Found 1 Pods for label selector component=kube-controller-manager 14:14:57 | ! I0322 14:14:57.393688 27669 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-addon-manager" ... 14:14:57 | ! I0322 14:14:57.396539 27669 kubernetes.go:134] Found 1 Pods for label selector component=kube-addon-manager 14:14:57 | ! I0322 14:14:57.396566 27669 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-dns" ... 14:14:57 | ! I0322 14:14:57.399470 27669 kubernetes.go:134] Found 2 Pods for label selector k8s-app=kube-dns 14:14:57 | ! I0322 14:14:57.399538 27669 exec_runner.go:50] Run with output: sudo systemctl is-active kubelet 14:14:57 | ! I0322 14:14:57.418475 27669 kubeadm.go:134] https://10.128.0.3:8443/healthz response: &{Status:200 OK StatusCode:200 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Date:[Fri, 22 Mar 2019 14:14:57 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] Body:0xc00080d040 ContentLength:2 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000100c00 TLS:0xc0002cc160} 14:14:57 | > - Verifying component health ..... 14:14:57 | > > Configuring local host environment ... 
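The bootstrapper above declares the node healthy once `https://10.128.0.3:8443/healthz` returns 200 (the two-byte body logged is "ok"). A minimal sketch of that check as pure logic — the function names are illustrative, not minikube's actual API; the host and port come from this log:

```python
# Sketch: interpret an apiserver /healthz probe the way the log above does.
# healthz_url/is_healthy are hypothetical helpers, not minikube code.

def healthz_url(host: str, port: int = 8443) -> str:
    """Build the health-check URL the bootstrapper polls."""
    return f"https://{host}:{port}/healthz"

def is_healthy(status_code: int, body: str) -> bool:
    """kube-apiserver answers a healthy probe with HTTP 200 and body "ok"."""
    return status_code == 200 and body == "ok"

if __name__ == "__main__":
    print(healthz_url("10.128.0.3"))  # https://10.128.0.3:8443/healthz
    print(is_healthy(200, "ok"))      # True
```

The actual probe would also need the cluster CA bundle (or client certs) to complete the TLS handshake; that part is out of scope here.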
14:14:57 | ! ! The 'none' driver provides limited isolation and may reduce system security and reliability.
14:14:57 | ! ! For more information, see:
14:14:57 | > - https://github.com/kubernetes/minikube/blob/master/docs/vmdriver-none.md
14:14:57 | ! ! kubectl and minikube configuration will be stored in /home/jenkins
14:14:57 | ! ! To use kubectl or minikube commands as your own user, you may
14:14:57 | ! ! need to relocate them. For example, to overwrite your own settings:
14:14:57 | > - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
14:14:57 | > - sudo chown -R $USER $HOME/.kube $HOME/.minikube
14:14:57 | > i This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
14:14:57 | > + kubectl is now configured to use "minikube"
14:14:57 | > = Done! Thank you for using minikube!
14:14:57 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 status --format={{.Host}} --v=10 --logtostderr --bootstrapper=kubeadm]
14:14:57 | ! I0322 14:14:57.453433 31271 notify.go:126] Checking for updates...
14:14:57 | ! I0322 14:14:57.527955 31271 none.go:231] checking for running kubelet ...
14:14:57 | ! I0322 14:14:57.527989 31271 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet
14:14:57 | ! I0322 14:14:57.534717 31271 exec_runner.go:50] Run with output: sudo systemctl is-active kubelet
14:14:57 | ! I0322 14:14:57.546208 31271 interface.go:360] Looking for default routes with IPv4 addresses
14:14:57 | ! I0322 14:14:57.546234 31271 interface.go:365] Default route transits interface "eth0"
14:14:57 | ! I0322 14:14:57.546522 31271 interface.go:174] Interface eth0 is up
14:14:57 | ! I0322 14:14:57.546637 31271 interface.go:222] Interface "eth0" has 1 addresses :[10.128.0.3/32].
14:14:57 | ! I0322 14:14:57.546655 31271 interface.go:189] Checking addr 10.128.0.3/32.
14:14:57 | ! I0322 14:14:57.546662 31271 interface.go:196] IP found 10.128.0.3
14:14:57 | ! I0322 14:14:57.546668 31271 interface.go:228] Found valid IPv4 address 10.128.0.3 for interface "eth0".
14:14:57 | ! I0322 14:14:57.546673 31271 interface.go:371] Found active IP 10.128.0.3
14:14:57 | ! I0322 14:14:57.553106 31271 kubeadm.go:134] https://10.128.0.3:8443/healthz response: &{Status:200 OK StatusCode:200 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Date:[Fri, 22 Mar 2019 14:14:57 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] Body:0xc00043d100 ContentLength:2 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003a3000 TLS:0xc0000c6b00}
14:14:57 | > Running
14:14:57 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 ip]
14:14:57 | > 10.128.0.3
14:14:57 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 stop]
14:14:57 | > : Stopping "minikube" in none ...
14:15:08 | > - "minikube" stopped.
14:15:08 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 status --format={{.Host}} --v=10 --logtostderr --bootstrapper=kubeadm]
14:15:08 | ! I0322 14:15:08.191634 32215 notify.go:126] Checking for updates...
14:15:08 | ! I0322 14:15:08.257791 32215 none.go:231] checking for running kubelet ...
14:15:08 | ! I0322 14:15:08.257812 32215 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet
14:15:08 | ! I0322 14:15:08.263221 32215 none.go:125] kubelet not running: running command: systemctl is-active --quiet service kubelet: exit status 3
14:15:08 | > Stopped
14:15:08 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 start --vm-driver=none --v=10 --logtostderr --bootstrapper=kubeadm --container-runtime=docker --cache-images --extra-config kubeadm.ignore-preflight-errors=SystemVerification --alsologtostderr --v=2]
14:15:08 | ! I0322 14:15:08.290312 32228 notify.go:126] Checking for updates...
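The `Attempting to cache image` entries in this run map each image reference to a file under `.minikube/cache/images/` by replacing the tag separator `:` with `_`. A rough sketch of that naming convention — the helper name is illustrative and mirrors the paths in this log, not minikube's actual code:

```python
import os

def cache_path(cache_root: str, image: str) -> str:
    # k8s.gcr.io/etcd-amd64:3.2.24 -> <root>/images/k8s.gcr.io/etcd-amd64_3.2.24,
    # matching the paths logged by cache_images.go in this run.
    return os.path.join(cache_root, "images", image.replace(":", "_"))

print(cache_path("/home/jenkins/.minikube/cache", "k8s.gcr.io/etcd-amd64:3.2.24"))
# /home/jenkins/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.2.24
```

The registry portion of the reference (`k8s.gcr.io/`, `gcr.io/k8s-minikube/`) survives as subdirectories, which is why both registries appear under `cache/images/` in the log.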
14:15:08 | > o minikube v0.35.0 on linux (amd64)
14:15:08 | > $ Downloading Kubernetes v1.13.4 images in the background ...
14:15:08 | ! I0322 14:15:08.355187 32228 start.go:605] Saving config:
14:15:08 | ! {
14:15:08 | !     "MachineConfig": {
14:15:08 | !         "MinikubeISO": "https://storage.googleapis.com/minikube/iso/minikube-v0.35.0.iso",
14:15:08 | !         "Memory": 2048,
14:15:08 | !         "CPUs": 2,
14:15:08 | !         "DiskSize": 20000,
14:15:08 | !         "VMDriver": "none",
14:15:08 | !         "ContainerRuntime": "docker",
14:15:08 | !         "HyperkitVpnKitSock": "",
14:15:08 | !         "HyperkitVSockPorts": [],
14:15:08 | !         "XhyveDiskDriver": "ahci-hd",
14:15:08 | !         "DockerEnv": null,
14:15:08 | !         "InsecureRegistry": null,
14:15:08 | !         "RegistryMirror": null,
14:15:08 | !         "HostOnlyCIDR": "192.168.99.1/24",
14:15:08 | !         "HypervVirtualSwitch": "",
14:15:08 | !         "KvmNetwork": "default",
14:15:08 | !         "DockerOpt": null,
14:15:08 | !         "DisableDriverMounts": false,
14:15:08 | !         "NFSShare": [],
14:15:08 | !         "NFSSharesRoot": "/nfsshares",
14:15:08 | !         "UUID": "",
14:15:08 | !         "GPU": false,
14:15:08 | !         "NoVTXCheck": false
14:15:08 | !     },
14:15:08 | !     "KubernetesConfig": {
14:15:08 | !         "KubernetesVersion": "v1.13.4",
14:15:08 | !         "NodeIP": "",
14:15:08 | !         "NodePort": 8443,
14:15:08 | !         "NodeName": "minikube",
14:15:08 | !         "APIServerName": "minikubeCA",
14:15:08 | !         "APIServerNames": null,
14:15:08 | !         "APIServerIPs": null,
14:15:08 | !         "DNSDomain": "cluster.local",
14:15:08 | !         "ContainerRuntime": "docker",
14:15:08 | !         "CRISocket": "",
14:15:08 | !         "NetworkPlugin": "",
14:15:08 | !         "FeatureGates": "",
14:15:08 | !         "ServiceCIDR": "10.96.0.0/12",
14:15:08 | !         "ImageRepository": "",
14:15:08 | !         "ExtraOptions": [
14:15:08 | !             {
14:15:08 | !                 "Component": "kubeadm",
14:15:08 | !                 "Key": "ignore-preflight-errors",
14:15:08 | !                 "Value": "SystemVerification"
14:15:08 | !             }
14:15:08 | !         ],
14:15:08 | !         "ShouldLoadCachedImages": true,
14:15:08 | !         "EnableDefaultCNI": false
14:15:08 | !     }
14:15:08 | ! }
14:15:08 | !
I0322 14:15:08.355286 32228 cache_images.go:292] Attempting to cache image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 14:15:08 | ! I0322 14:15:08.355318 32228 cache_images.go:292] Attempting to cache image: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.8 14:15:08 | ! I0322 14:15:08.355334 32228 cache_images.go:292] Attempting to cache image: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.8 14:15:08 | ! I0322 14:15:08.355352 32228 cache_images.go:292] Attempting to cache image: k8s.gcr.io/etcd-amd64:3.2.24 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.2.24 14:15:08 | ! I0322 14:15:08.355366 32228 cache_images.go:292] Attempting to cache image: k8s.gcr.io/coredns:1.2.6 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/coredns_1.2.6 14:15:08 | ! I0322 14:15:08.355380 32228 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1 14:15:08 | ! 
I0322 14:15:08.355392 32228 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-addon-manager:v8.6 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v8.6 14:15:08 | ! I0322 14:15:08.355407 32228 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-controller-manager-amd64:v1.13.4 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-amd64_v1.13.4 14:15:08 | ! I0322 14:15:08.355422 32228 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-proxy-amd64:v1.13.4 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-proxy-amd64_v1.13.4 14:15:08 | ! I0322 14:15:08.355436 32228 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-scheduler-amd64:v1.13.4 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-scheduler-amd64_v1.13.4 14:15:08 | ! I0322 14:15:08.355436 32228 cache_images.go:292] Attempting to cache image: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.8 14:15:08 | ! I0322 14:15:08.355450 32228 cache_images.go:292] Attempting to cache image: k8s.gcr.io/pause-amd64:3.1 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause-amd64_3.1 14:15:08 | ! 
I0322 14:15:08.355462 32228 cache_images.go:292] Attempting to cache image: k8s.gcr.io/pause:3.1 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause_3.1
14:15:08 | ! I0322 14:15:08.355466 32228 cache_images.go:292] Attempting to cache image: k8s.gcr.io/kube-apiserver-amd64:v1.13.4 at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-apiserver-amd64_v1.13.4
14:15:08 | ! I0322 14:15:08.355477 32228 cache_images.go:83] Successfully cached all images.
14:15:08 | ! I0322 14:15:08.355724 32228 cluster.go:73] Skipping create...Using existing machine configuration
14:15:08 | > i Tip: Use 'minikube start -p ' to create a new cluster, or 'minikube delete' to delete this one.
14:15:08 | ! I0322 14:15:08.356314 32228 none.go:231] checking for running kubelet ...
14:15:08 | ! I0322 14:15:08.356321 32228 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet
14:15:08 | ! I0322 14:15:08.362088 32228 none.go:125] kubelet not running: running command: systemctl is-active --quiet service kubelet: exit status 3
14:15:08 | ! I0322 14:15:08.362116 32228 cluster.go:92] Machine state: Stopped
14:15:08 | > : Restarting existing none VM for "minikube" ...
14:15:08 | ! I0322 14:15:08.363086 32228 cluster.go:110] engine options: &{ArbitraryFlags:[] DNS:[] GraphDir: Env:[] Ipv6:false InsecureRegistry:[10.96.0.0/12] Labels:[] LogLevel: StorageDriver: SelinuxEnabled:false TLSVerify:false RegistryMirror:[] InstallURL:}
14:15:08 | > : Waiting for SSH access ...
14:15:08 | > - "minikube" IP address is 10.128.0.3
14:15:08 | ! I0322 14:15:08.363528 32228 start.go:605] Saving config:
14:15:08 | ! {
14:15:08 | !     "MachineConfig": {
14:15:08 | !         "MinikubeISO": "https://storage.googleapis.com/minikube/iso/minikube-v0.35.0.iso",
14:15:08 | !         "Memory": 2048,
14:15:08 | !         "CPUs": 2,
14:15:08 | !         "DiskSize": 20000,
14:15:08 | !         "VMDriver": "none",
14:15:08 | !         "ContainerRuntime": "docker",
14:15:08 | !         "HyperkitVpnKitSock": "",
14:15:08 | !         "HyperkitVSockPorts": [],
14:15:08 | !         "XhyveDiskDriver": "ahci-hd",
14:15:08 | !         "DockerEnv": null,
14:15:08 | !         "InsecureRegistry": null,
14:15:08 | !         "RegistryMirror": null,
14:15:08 | !         "HostOnlyCIDR": "192.168.99.1/24",
14:15:08 | !         "HypervVirtualSwitch": "",
14:15:08 | !         "KvmNetwork": "default",
14:15:08 | !         "DockerOpt": null,
14:15:08 | !         "DisableDriverMounts": false,
14:15:08 | !         "NFSShare": [],
14:15:08 | !         "NFSSharesRoot": "/nfsshares",
14:15:08 | !         "UUID": "",
14:15:08 | !         "GPU": false,
14:15:08 | !         "NoVTXCheck": false
14:15:08 | !     },
14:15:08 | !     "KubernetesConfig": {
14:15:08 | !         "KubernetesVersion": "v1.13.4",
14:15:08 | !         "NodeIP": "10.128.0.3",
14:15:08 | !         "NodePort": 8443,
14:15:08 | !         "NodeName": "minikube",
14:15:08 | !         "APIServerName": "minikubeCA",
14:15:08 | !         "APIServerNames": null,
14:15:08 | !         "APIServerIPs": null,
14:15:08 | !         "DNSDomain": "cluster.local",
14:15:08 | !         "ContainerRuntime": "docker",
14:15:08 | !         "CRISocket": "",
14:15:08 | !         "NetworkPlugin": "",
14:15:08 | !         "FeatureGates": "",
14:15:08 | !         "ServiceCIDR": "10.96.0.0/12",
14:15:08 | !         "ImageRepository": "",
14:15:08 | !         "ExtraOptions": [
14:15:08 | !             {
14:15:08 | !                 "Component": "kubeadm",
14:15:08 | !                 "Key": "ignore-preflight-errors",
14:15:08 | !                 "Value": "SystemVerification"
14:15:08 | !             }
14:15:08 | !         ],
14:15:08 | !         "ShouldLoadCachedImages": true,
14:15:08 | !         "EnableDefaultCNI": false
14:15:08 | !     }
14:15:08 | ! }
14:15:08 | ! I0322 14:15:08.363776 32228 exec_runner.go:39] Run: systemctl is-active --quiet service containerd
14:15:08 | > - Configuring Docker as the container runtime ...
14:15:08 | ! I0322 14:15:08.369078 32228 exec_runner.go:39] Run: systemctl is-active --quiet service crio
14:15:08 | ! I0322 14:15:08.373788 32228 exec_runner.go:39] Run: systemctl is-active --quiet service rkt-api
14:15:08 | !
I0322 14:15:08.378285 32228 exec_runner.go:39] Run: sudo systemctl restart docker 14:15:10 | ! I0322 14:15:10.248466 32228 exec_runner.go:50] Run with output: docker version --format '{{.Server.Version}}' 14:15:10 | > - Version of container runtime is 18.06.1-ce 14:15:10 | > - Preparing Kubernetes environment ... 14:15:10 | > - kubeadm.ignore-preflight-errors=SystemVerification 14:15:10 | ! I0322 14:15:10.319875 32228 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 14:15:10 | ! I0322 14:15:10.319923 32228 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.8 14:15:10 | ! I0322 14:15:10.319925 32228 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-amd64_v1.13.4 14:15:10 | ! I0322 14:15:10.324758 32228 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-proxy-amd64_v1.13.4 14:15:10 | ! I0322 14:15:10.328958 32228 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-scheduler-amd64_v1.13.4 14:15:10 | ! I0322 14:15:10.333684 32228 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-apiserver-amd64_v1.13.4 14:15:10 | ! 
I0322 14:15:10.335120 32228 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause-amd64_3.1 14:15:10 | ! I0322 14:15:10.335755 32228 docker.go:89] Loading image: /tmp/pause-amd64_3.1 14:15:10 | ! I0322 14:15:10.335772 32228 exec_runner.go:39] Run: docker load -i /tmp/pause-amd64_3.1 14:15:10 | ! I0322 14:15:10.347069 32228 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.8 14:15:10 | ! I0322 14:15:10.361926 32228 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause_3.1 14:15:10 | ! I0322 14:15:10.362430 32228 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.8 14:15:10 | ! I0322 14:15:10.370704 32228 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.2.24 14:15:10 | ! I0322 14:15:10.370956 32228 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/coredns_1.2.6 14:15:10 | ! I0322 14:15:10.370992 32228 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1 14:15:10 | ! 
I0322 14:15:10.371009 32228 cache_images.go:203] Loading image from cache at /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v8.6 14:15:10 | ! I0322 14:15:10.561430 32228 exec_runner.go:39] Run: sudo rm -rf /tmp/pause-amd64_3.1 14:15:10 | ! I0322 14:15:10.568947 32228 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause-amd64_3.1 from cache 14:15:10 | ! I0322 14:15:10.568995 32228 docker.go:89] Loading image: /tmp/storage-provisioner_v1.8.1 14:15:10 | ! I0322 14:15:10.569002 32228 exec_runner.go:39] Run: docker load -i /tmp/storage-provisioner_v1.8.1 14:15:10 | ! I0322 14:15:10.739418 32228 exec_runner.go:39] Run: sudo rm -rf /tmp/storage-provisioner_v1.8.1 14:15:10 | ! I0322 14:15:10.751559 32228 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 from cache 14:15:10 | ! I0322 14:15:10.751612 32228 docker.go:89] Loading image: /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8 14:15:10 | ! I0322 14:15:10.751621 32228 exec_runner.go:39] Run: docker load -i /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8 14:15:10 | ! I0322 14:15:10.905668 32228 exec_runner.go:39] Run: sudo rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8 14:15:10 | ! I0322 14:15:10.915612 32228 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.8 from cache 14:15:10 | ! I0322 14:15:10.915668 32228 docker.go:89] Loading image: /tmp/pause_3.1 14:15:10 | ! I0322 14:15:10.915678 32228 exec_runner.go:39] Run: docker load -i /tmp/pause_3.1 14:15:11 | ! 
I0322 14:15:11.059488 32228 exec_runner.go:39] Run: sudo rm -rf /tmp/pause_3.1 14:15:11 | ! I0322 14:15:11.066707 32228 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/pause_3.1 from cache 14:15:11 | ! I0322 14:15:11.066754 32228 docker.go:89] Loading image: /tmp/kube-scheduler-amd64_v1.13.4 14:15:11 | ! I0322 14:15:11.066763 32228 exec_runner.go:39] Run: docker load -i /tmp/kube-scheduler-amd64_v1.13.4 14:15:11 | ! I0322 14:15:11.266080 32228 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-scheduler-amd64_v1.13.4 14:15:11 | ! I0322 14:15:11.278365 32228 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-scheduler-amd64_v1.13.4 from cache 14:15:11 | ! I0322 14:15:11.278413 32228 docker.go:89] Loading image: /tmp/k8s-dns-kube-dns-amd64_1.14.8 14:15:11 | ! I0322 14:15:11.278421 32228 exec_runner.go:39] Run: docker load -i /tmp/k8s-dns-kube-dns-amd64_1.14.8 14:15:11 | ! I0322 14:15:11.467300 32228 exec_runner.go:39] Run: sudo rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.8 14:15:11 | ! I0322 14:15:11.477527 32228 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.8 from cache 14:15:11 | ! I0322 14:15:11.477576 32228 docker.go:89] Loading image: /tmp/kube-controller-manager-amd64_v1.13.4 14:15:11 | ! I0322 14:15:11.477585 32228 exec_runner.go:39] Run: docker load -i /tmp/kube-controller-manager-amd64_v1.13.4 14:15:11 | ! I0322 14:15:11.702019 32228 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-controller-manager-amd64_v1.13.4 14:15:11 | ! 
I0322 14:15:11.716890 32228 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-amd64_v1.13.4 from cache 14:15:11 | ! I0322 14:15:11.716933 32228 docker.go:89] Loading image: /tmp/k8s-dns-sidecar-amd64_1.14.8 14:15:11 | ! I0322 14:15:11.716946 32228 exec_runner.go:39] Run: docker load -i /tmp/k8s-dns-sidecar-amd64_1.14.8 14:15:11 | ! I0322 14:15:11.876880 32228 exec_runner.go:39] Run: sudo rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.8 14:15:11 | ! I0322 14:15:11.887903 32228 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.8 from cache 14:15:11 | ! I0322 14:15:11.887968 32228 docker.go:89] Loading image: /tmp/coredns_1.2.6 14:15:11 | ! I0322 14:15:11.887977 32228 exec_runner.go:39] Run: docker load -i /tmp/coredns_1.2.6 14:15:12 | ! I0322 14:15:12.054677 32228 exec_runner.go:39] Run: sudo rm -rf /tmp/coredns_1.2.6 14:15:12 | ! I0322 14:15:12.064321 32228 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/coredns_1.2.6 from cache 14:15:12 | ! I0322 14:15:12.064477 32228 docker.go:89] Loading image: /tmp/kube-apiserver-amd64_v1.13.4 14:15:12 | ! I0322 14:15:12.064492 32228 exec_runner.go:39] Run: docker load -i /tmp/kube-apiserver-amd64_v1.13.4 14:15:12 | ! I0322 14:15:12.277194 32228 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-apiserver-amd64_v1.13.4 14:15:12 | ! I0322 14:15:12.293243 32228 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-apiserver-amd64_v1.13.4 from cache 14:15:12 | ! 
I0322 14:15:12.293297 32228 docker.go:89] Loading image: /tmp/kube-addon-manager_v8.6 14:15:12 | ! I0322 14:15:12.293305 32228 exec_runner.go:39] Run: docker load -i /tmp/kube-addon-manager_v8.6 14:15:12 | ! I0322 14:15:12.481609 32228 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-addon-manager_v8.6 14:15:12 | ! I0322 14:15:12.492881 32228 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v8.6 from cache 14:15:12 | ! I0322 14:15:12.492942 32228 docker.go:89] Loading image: /tmp/kube-proxy-amd64_v1.13.4 14:15:12 | ! I0322 14:15:12.492954 32228 exec_runner.go:39] Run: docker load -i /tmp/kube-proxy-amd64_v1.13.4 14:15:12 | ! I0322 14:15:12.690642 32228 exec_runner.go:39] Run: sudo rm -rf /tmp/kube-proxy-amd64_v1.13.4 14:15:12 | ! I0322 14:15:12.704315 32228 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kube-proxy-amd64_v1.13.4 from cache 14:15:12 | ! I0322 14:15:12.704378 32228 docker.go:89] Loading image: /tmp/kubernetes-dashboard-amd64_v1.10.1 14:15:12 | ! I0322 14:15:12.704387 32228 exec_runner.go:39] Run: docker load -i /tmp/kubernetes-dashboard-amd64_v1.10.1 14:15:12 | ! I0322 14:15:12.921607 32228 exec_runner.go:39] Run: sudo rm -rf /tmp/kubernetes-dashboard-amd64_v1.10.1 14:15:12 | ! I0322 14:15:12.937747 32228 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1 from cache 14:15:12 | ! I0322 14:15:12.937818 32228 docker.go:89] Loading image: /tmp/etcd-amd64_3.2.24 14:15:12 | ! I0322 14:15:12.937831 32228 exec_runner.go:39] Run: docker load -i /tmp/etcd-amd64_3.2.24 14:15:13 | ! 
I0322 14:15:13.181814 32228 exec_runner.go:39] Run: sudo rm -rf /tmp/etcd-amd64_3.2.24
14:15:13 | ! I0322 14:15:13.201742 32228 cache_images.go:234] Successfully loaded image /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/cache/images/k8s.gcr.io/etcd-amd64_3.2.24 from cache
14:15:13 | ! I0322 14:15:13.201805 32228 cache_images.go:109] Successfully loaded all cached images.
14:15:13 | ! I0322 14:15:13.202320 32228 kubeadm.go:452] kubelet v1.13.4 config:
14:15:13 | ! [Unit]
14:15:13 | ! Wants=docker.socket
14:15:13 | ! [Service]
14:15:13 | ! ExecStart=
14:15:13 | ! ExecStart=/usr/bin/kubelet --allow-privileged=true --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests
14:15:13 | ! [Install]
14:15:13 | ! I0322 14:15:13.343241 32228 exec_runner.go:39] Run:
14:15:13 | ! sudo systemctl daemon-reload &&
14:15:13 | ! sudo systemctl enable kubelet &&
14:15:13 | ! sudo systemctl start kubelet
14:15:13 | ! I0322 14:15:13.480033 32228 certs.go:46] Setting up certificates for IP: 10.128.0.3
14:15:13 | > : Waiting for image downloads to complete ...
14:15:13 | ! I0322 14:15:13.492390 32228 kubeconfig.go:127] Using kubeconfig: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/kubeconfig
14:15:13 | > - Pulling images required by Kubernetes v1.13.4 ...
14:15:13 | ! I0322 14:15:13.493912 32228 exec_runner.go:39] Run: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml
14:15:15 | !
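Editor's note: minikube's kubelet probe (`systemctl is-active --quiet service kubelet`) communicates state purely through exit codes — 0 for active, 3 for inactive, which is the `exit status 3` seen earlier when the machine state was Stopped. A small sketch of that convention, with the probe command passed in as an argument so it does not require systemd to run:

```shell
# Interpret a systemd-style is-active probe; exit 0 = active, 3 = inactive/stopped
service_state() {
  rc=0
  "$@" || rc=$?   # on a real host the probe would be: systemctl is-active --quiet service kubelet
  case $rc in
    0) echo active ;;
    3) echo inactive ;;
    *) echo "unknown (exit $rc)" ;;
  esac
}

# Stub probes for illustration
service_state true               # active
service_state sh -c 'exit 3'     # inactive
```

Using `--quiet` and the exit code (rather than parsing output) is what lets exec_runner treat the check as a plain Run with no output capture.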
I0322 14:15:15.122695 32228 exec_runner.go:39] Run: sudo kubeadm init phase certs all --config /var/lib/kubeadm.yaml 14:15:15 | > : Relaunching Kubernetes v1.13.4 using kubeadm ... 14:15:15 | ! I0322 14:15:15.288957 32228 exec_runner.go:39] Run: sudo kubeadm init phase kubeconfig all --config /var/lib/kubeadm.yaml 14:15:17 | ! I0322 14:15:17.388464 32228 exec_runner.go:39] Run: sudo kubeadm init phase control-plane all --config /var/lib/kubeadm.yaml 14:15:17 | ! I0322 14:15:17.450944 32228 exec_runner.go:39] Run: sudo kubeadm init phase etcd local --config /var/lib/kubeadm.yaml 14:15:17 | ! I0322 14:15:17.516410 32228 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-apiserver" ... 14:15:23 | ! I0322 14:15:23.474127 32228 kubernetes.go:134] Found 1 Pods for label selector component=kube-apiserver 14:15:23 | ! I0322 14:15:23.474255 32228 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-proxy" ... 14:15:23 | ! I0322 14:15:23.481347 32228 kubernetes.go:134] Found 1 Pods for label selector k8s-app=kube-proxy 14:15:23 | ! I0322 14:15:23.481421 32228 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=etcd" ... 14:15:23 | ! I0322 14:15:23.485213 32228 kubernetes.go:134] Found 1 Pods for label selector component=etcd 14:15:23 | ! I0322 14:15:23.485269 32228 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-scheduler" ... 14:15:23 | ! I0322 14:15:23.488405 32228 kubernetes.go:134] Found 1 Pods for label selector component=kube-scheduler 14:15:23 | ! I0322 14:15:23.488455 32228 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-controller-manager" ... 14:15:23 | ! I0322 14:15:23.492831 32228 kubernetes.go:134] Found 1 Pods for label selector component=kube-controller-manager 14:15:23 | ! I0322 14:15:23.492882 32228 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-addon-manager" ... 
14:15:23 | ! I0322 14:15:23.508066 32228 kubernetes.go:134] Found 1 Pods for label selector component=kube-addon-manager
14:15:23 | ! I0322 14:15:23.508146 32228 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-dns" ...
14:15:23 | ! I0322 14:15:23.526779 32228 kubernetes.go:134] Found 2 Pods for label selector k8s-app=kube-dns
14:15:23 | > : Waiting for pods: apiserver proxy etcd scheduler controller addon-manager dns
14:15:23 | > : Updating kube-proxy configuration ...
14:15:23 | ! I0322 14:15:23.529227 32228 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-proxy" ...
14:15:23 | ! I0322 14:15:23.548539 32228 kubernetes.go:134] Found 1 Pods for label selector k8s-app=kube-proxy
14:15:23 | ! I0322 14:15:23.554564 32228 util.go:174] kube-proxy config: apiVersion: v1
14:15:23 | ! kind: Config
14:15:23 | ! clusters:
14:15:23 | ! - cluster:
14:15:23 | !     certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
14:15:23 | !     server: https://localhost:8443
14:15:23 | !   name: default
14:15:23 | ! contexts:
14:15:23 | ! - context:
14:15:23 | !     cluster: default
14:15:23 | !     namespace: default
14:15:23 | !     user: default
14:15:23 | !   name: default
14:15:23 | ! current-context: default
14:15:23 | ! users:
14:15:23 | ! - name: default
14:15:23 | !   user:
14:15:23 | !     tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
14:15:23 | ! I0322 14:15:23.555223 32228 util.go:194] updated kube-proxy config: apiVersion: v1
14:15:23 | ! kind: Config
14:15:23 | ! clusters:
14:15:23 | ! - cluster:
14:15:23 | !     certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
14:15:23 | !     server: https://10.128.0.3:8443
14:15:23 | !   name: default
14:15:23 | ! contexts:
14:15:23 | ! - context:
14:15:23 | !     cluster: default
14:15:23 | !     namespace: default
14:15:23 | !     user: default
14:15:23 | !   name: default
14:15:23 | ! current-context: default
14:15:23 | ! users:
14:15:23 | ! - name: default
14:15:23 | !   user:
14:15:23 | !     tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
14:15:23 | ! I0322 14:15:23.595497 32228 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-proxy" ...
14:15:23 | ! I0322 14:15:23.598244 32228 kubernetes.go:134] Found 1 Pods for label selector k8s-app=kube-proxy
14:15:23 | ! I0322 14:15:23.600373 32228 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-apiserver" ...
14:15:23 | ! I0322 14:15:23.603297 32228 kubernetes.go:134] Found 1 Pods for label selector component=kube-apiserver
14:15:23 | ! I0322 14:15:23.603327 32228 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-proxy" ...
14:15:23 | ! I0322 14:15:23.606321 32228 kubernetes.go:134] Found 1 Pods for label selector k8s-app=kube-proxy
14:15:23 | ! I0322 14:15:23.606351 32228 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=etcd" ...
14:15:23 | ! I0322 14:15:23.609075 32228 kubernetes.go:134] Found 1 Pods for label selector component=etcd
14:15:23 | ! I0322 14:15:23.609117 32228 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-scheduler" ...
14:15:23 | ! I0322 14:15:23.611868 32228 kubernetes.go:134] Found 1 Pods for label selector component=kube-scheduler
14:15:23 | ! I0322 14:15:23.611904 32228 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-controller-manager" ...
14:15:23 | ! I0322 14:15:23.615049 32228 kubernetes.go:134] Found 1 Pods for label selector component=kube-controller-manager
14:15:23 | ! I0322 14:15:23.615083 32228 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-addon-manager" ...
14:15:23 | ! I0322 14:15:23.617741 32228 kubernetes.go:134] Found 1 Pods for label selector component=kube-addon-manager
14:15:23 | !
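Editor's note: comparing the before/after dumps above, the kube-proxy update touches exactly one line — `server:` is repointed from localhost to the node IP so kube-proxy reaches the apiserver directly. A rough equivalent with sed on a throwaway copy of the config (the localhost and 10.128.0.3 endpoints are taken from the log; this is not minikube's actual update code, which edits the ConfigMap in-cluster):

```shell
# Throwaway copy of the relevant fragment of the kube-proxy kubeconfig
cat > /tmp/kube-proxy.conf <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://localhost:8443
  name: default
EOF

# Repoint the apiserver endpoint at the node IP, mirroring the "updated kube-proxy config" dump
sed -i 's|server: https://localhost:8443|server: https://10.128.0.3:8443|' /tmp/kube-proxy.conf
grep 'server:' /tmp/kube-proxy.conf   # shows the rewritten server line
```

`|` is used as the sed delimiter so the URLs' slashes need no escaping.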
I0322 14:15:23.617767 32228 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-dns" ... 14:15:23 | ! I0322 14:15:23.620701 32228 kubernetes.go:134] Found 2 Pods for label selector k8s-app=kube-dns 14:15:23 | ! I0322 14:15:23.620759 32228 exec_runner.go:50] Run with output: sudo systemctl is-active kubelet 14:15:23 | ! I0322 14:15:23.645631 32228 kubeadm.go:134] https://10.128.0.3:8443/healthz response: &{Status:500 Internal Server Error StatusCode:500 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff] Date:[Fri, 22 Mar 2019 14:15:23 GMT] Content-Length:[816]] Body:0xc00067e240 ContentLength:816 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003a2e00 TLS:0xc0004be000} 14:15:23 | ! I0322 14:15:23.645876 32228 utils.go:125] error: Temporary Error: apiserver status=Error err= - sleeping 10s 14:15:33 | ! I0322 14:15:33.646083 32228 utils.go:114] retry loop 1 14:15:33 | ! I0322 14:15:33.652374 32228 kubeadm.go:134] https://10.128.0.3:8443/healthz response: &{Status:200 OK StatusCode:200 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Date:[Fri, 22 Mar 2019 14:15:33 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] Body:0xc000788040 ContentLength:2 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005dc100 TLS:0xc0005480b0} 14:15:33 | > - Verifying component health ...... 14:15:33 | > > Configuring local host environment ... 14:15:33 | > - https://github.com/kubernetes/minikube/blob/master/docs/vmdriver-none.md 14:15:33 | > - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME 14:15:33 | > - sudo chown -R $USER $HOME/.kube $HOME/.minikube 14:15:33 | > i This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true 14:15:33 | > + kubectl is now configured to use "minikube" 14:15:33 | > = Done! Thank you for using minikube! 14:15:33 | ! ! 
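Editor's note: the health check above follows a plain probe-and-retry pattern — /healthz returned 500 while the restarted control plane was still coming up, minikube slept 10s, and the next probe returned 200. A generic sketch of that loop; the curl probe in the trailing comment is an assumption for illustration, not minikube's actual Go client code:

```shell
# retry N S CMD...: run CMD until it succeeds, at most N attempts, sleeping S seconds between tries
retry() {
  n=$1; s=$2; shift 2
  i=1
  until "$@"; do
    if [ "$i" -ge "$n" ]; then
      return 1   # give up after N failed attempts
    fi
    i=$((i + 1))
    sleep "$s"
  done
}

# On a real host the probe might be: retry 6 10 curl -fsk https://10.128.0.3:8443/healthz
```

Note the loop treats any nonzero exit as "not ready yet", matching the log's handling of the 500 response as a temporary error.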
The 'none' driver provides limited isolation and may reduce system security and reliability. 14:15:33 | ! ! For more information, see: 14:15:33 | ! ! kubectl and minikube configuration will be stored in /home/jenkins 14:15:33 | ! ! To use kubectl or minikube commands as your own user, you may 14:15:33 | ! ! need to relocate them. For example, to overwrite your own settings: 14:15:33 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 status --format={{.Host}} --v=10 --logtostderr --bootstrapper=kubeadm] 14:15:33 | ! I0322 14:15:33.685059 1793 notify.go:126] Checking for updates... 14:15:33 | ! I0322 14:15:33.751630 1793 none.go:231] checking for running kubelet ... 14:15:33 | ! I0322 14:15:33.751652 1793 exec_runner.go:39] Run: systemctl is-active --quiet service kubelet 14:15:33 | ! I0322 14:15:33.757848 1793 exec_runner.go:50] Run with output: sudo systemctl is-active kubelet 14:15:33 | ! I0322 14:15:33.768425 1793 interface.go:360] Looking for default routes with IPv4 addresses 14:15:33 | ! I0322 14:15:33.768559 1793 interface.go:365] Default route transits interface "eth0" 14:15:33 | ! I0322 14:15:33.768851 1793 interface.go:174] Interface eth0 is up 14:15:33 | ! I0322 14:15:33.768913 1793 interface.go:222] Interface "eth0" has 1 addresses :[10.128.0.3/32]. 14:15:33 | ! I0322 14:15:33.768936 1793 interface.go:189] Checking addr 10.128.0.3/32. 14:15:33 | ! I0322 14:15:33.768944 1793 interface.go:196] IP found 10.128.0.3 14:15:33 | ! I0322 14:15:33.768953 1793 interface.go:228] Found valid IPv4 address 10.128.0.3 for interface "eth0". 14:15:33 | ! I0322 14:15:33.768960 1793 interface.go:371] Found active IP 10.128.0.3 14:15:33 | ! 
I0322 14:15:33.775125 1793 kubeadm.go:134] https://10.128.0.3:8443/healthz response: &{Status:200 OK StatusCode:200 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Date:[Fri, 22 Mar 2019 14:15:33 GMT] Content-Length:[2] Content-Type:[text/plain; charset=utf-8]] Body:0xc00008a9c0 ContentLength:2 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003aef00 TLS:0xc0000c88f0} 14:15:33 | > Running 14:15:33 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 delete] 14:15:33 | > # Uninstalling Kubernetes v1.13.4 using kubeadm ... 14:15:39 | > x Deleting "minikube" from none ... 14:15:39 | > - The "minikube" cluster has been deleted. 14:15:39 | Run: [/home/jenkins/workspace/Linux_Integration_Tests_none/out/minikube-linux-amd64 status --format={{.Host}} --v=10 --logtostderr --bootstrapper=kubeadm] 14:15:39 | ! I0322 14:15:39.726913 3013 notify.go:126] Checking for updates... === RUN TestStartStop/containerd+cache === RUN TestStartStop/crio+cache --- PASS: TestStartStop (327.93s) --- PASS: TestStartStop/docker+cache (166.04s) --- PASS: TestStartStop/docker+cache+ignore_verifications (161.89s) --- SKIP: TestStartStop/containerd+cache (0.00s) start_stop_delete_test.go:46: skipping containerd+cache - incompatible with none driver --- SKIP: TestStartStop/crio+cache (0.00s) start_stop_delete_test.go:46: skipping crio+cache - incompatible with none driver FAIL ++ result=1 +++ date ++ echo '>> out/e2e-linux-amd64 exited with 1 at Fri Mar 22 14:15:39 UTC 2019' >> out/e2e-linux-amd64 exited with 1 at Fri Mar 22 14:15:39 UTC 2019 ++ echo '' ++ [[ 1 -eq 0 ]] ++ status=failure ++ echo 'minikube: FAIL' minikube: FAIL ++ source print-debug-info.sh +++ set +e +++ echo '' ++++ date +++ echo '>>> print-debug-info at Fri Mar 22 14:15:39 UTC 2019:' >>> print-debug-info at Fri Mar 22 14:15:39 UTC 2019: +++ echo '' +++ sudo -E cat 
/home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/ca.crt
    server: https://10.128.0.3:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/client.crt
    client-key: /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube/client.key
+++ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
error: Error loading config file "/home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/kubeconfig": open /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/kubeconfig: permission denied
+++ MINIKUBE='sudo -E out/minikube-linux-amd64'
+++ sudo -E out/minikube-linux-amd64 status
host:
kubelet:
apiserver:
! Error executing status template: template: status:3:13: executing "status" at <.ApiServer>: can't evaluate field ApiServer in type cmd.Status
* Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
  - https://github.com/kubernetes/minikube/issues/new
+++ sudo -E out/minikube-linux-amd64 ip
!
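Editor's note: the `permission denied` above is the usual none-driver symptom — minikube ran under `sudo -E`, so the kubeconfig and .minikube tree are root-owned and unreadable to the jenkins user, which is what the earlier `sudo chown -R $USER $HOME/.kube $HOME/.minikube` hint addresses. The debug script's workaround, visible in the trace, is to read the file via `sudo -E cat`. A minimal sketch of that fallback, using a hypothetical world-readable path so it runs anywhere:

```shell
# Read a kubeconfig directly if possible, otherwise fall back to sudo,
# mirroring print-debug-info.sh's "sudo -E cat" on the root-owned file
read_kubeconfig() {
  f=$1
  if [ -r "$f" ]; then
    cat "$f"
  else
    sudo -E cat "$f"
  fi
}

echo 'apiVersion: v1' > /tmp/demo_kubeconfig   # hypothetical readable stand-in
read_kubeconfig /tmp/demo_kubeconfig
```

Setting CHANGE_MINIKUBE_NONE_USER=true (mentioned in the start output above) makes minikube do the chown itself, removing the need for this fallback.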
"minikube" host does not exist, unable to show an IP +++ [[ none == \n\o\n\e ]] +++ run= ++++ date +++ echo 'Local date: Fri Mar 22 14:15:40 UTC 2019' Local date: Fri Mar 22 14:15:40 UTC 2019 +++ date Fri Mar 22 14:15:40 UTC 2019 +++ uptime 14:15:40 up 9:57, 0 users, load average: 1.32, 1.13, 1.30 +++ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +++ env TERM=dumb systemctl list-units --state=failed 0 loaded units listed. Pass --all to see loaded but inactive units, too. To show all installed unit files use 'systemctl list-unit-files'. +++ env TERM=dumb journalctl --no-tail --no-pager -p notice -- Logs begin at Fri 2019-03-22 12:13:56 UTC, end at Fri 2019-03-22 14:15:40 UTC. -- Mar 22 12:13:55 kvm-integration-slave systemd-udevd[23868]: Could not generate persistent MAC address for vethe2ab11f: No such file or directory Mar 22 12:13:55 kvm-integration-slave systemd-udevd[23869]: Could not generate persistent MAC address for veth151d009: No such file or directory Mar 22 12:13:56 kvm-integration-slave systemd-udevd[24130]: Could not generate persistent MAC address for veth92e4e55: No such file or directory Mar 22 12:13:56 kvm-integration-slave systemd-udevd[24131]: Could not generate persistent MAC address for vethcfb7644: No such file or directory Mar 22 12:13:56 kvm-integration-slave systemd-udevd[24130]: Could not generate persistent MAC address for veth7b0f079: No such file or directory Mar 22 12:13:56 kvm-integration-slave systemd-udevd[24146]: Could not generate persistent MAC address for veth7ef93ed: No such file or directory Mar 22 12:14:57 kvm-integration-slave sudo[25324]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet Mar 22 12:14:57 kvm-integration-slave sudo[25339]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet Mar 22 12:14:58 kvm-integration-slave sudo[25381]: root 
: TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl stop kubelet.service Mar 22 12:15:08 kvm-integration-slave sudo[26322]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl restart docker Mar 22 12:15:11 kvm-integration-slave sudo[26505]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause-amd64_3.1 Mar 22 12:15:11 kvm-integration-slave sudo[26525]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause_3.1 Mar 22 12:15:11 kvm-integration-slave sudo[26545]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.8 Mar 22 12:15:11 kvm-integration-slave sudo[26565]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/storage-provisioner_v1.8.1 Mar 22 12:15:12 kvm-integration-slave sudo[26585]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/coredns_1.2.6 Mar 22 12:15:12 kvm-integration-slave sudo[26606]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.8 Mar 22 12:15:12 kvm-integration-slave sudo[26625]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8 Mar 22 12:15:12 kvm-integration-slave sudo[26644]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-addon-manager_v8.6 Mar 22 12:15:12 kvm-integration-slave sudo[26664]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm 
-rf /tmp/kube-controller-manager-amd64_v1.13.4 Mar 22 12:15:13 kvm-integration-slave sudo[26684]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-proxy-amd64_v1.13.4 Mar 22 12:15:13 kvm-integration-slave sudo[26703]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-scheduler-amd64_v1.13.4 Mar 22 12:15:13 kvm-integration-slave sudo[26722]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kubernetes-dashboard-amd64_v1.10.1 Mar 22 12:15:14 kvm-integration-slave sudo[26741]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/etcd-amd64_3.2.24 Mar 22 12:15:14 kvm-integration-slave sudo[26760]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-apiserver-amd64_v1.13.4 Mar 22 12:15:14 kvm-integration-slave sudo[26763]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl daemon-reload Mar 22 12:15:14 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway. Mar 22 12:15:14 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway. Mar 22 12:15:14 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway. 
Mar 22 12:15:14 kvm-integration-slave sudo[26776]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl enable kubelet
Mar 22 12:15:14 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 12:15:14 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 12:15:14 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 12:15:14 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 12:15:14 kvm-integration-slave sudo[26762]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl start kubelet
Mar 22 12:15:14 kvm-integration-slave sudo[26797]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm config images pull --config /var/lib/kubeadm.yaml
Mar 22 12:15:16 kvm-integration-slave sudo[27525]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase certs all --config /var/lib/kubeadm.yaml
Mar 22 12:15:17 kvm-integration-slave sudo[27539]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase kubeconfig all --config /var/lib/kubeadm.yaml
Mar 22 12:15:20 kvm-integration-slave sudo[27572]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase control-plane all --config /var/lib/kubeadm.yaml
Mar 22 12:15:20 kvm-integration-slave sudo[27583]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase etcd local --config /var/lib/kubeadm.yaml
Mar 22 12:15:25 kvm-integration-slave sudo[27595]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 12:15:26 kvm-integration-slave systemd-udevd[27623]: Could not generate persistent MAC address for vethce2963d: No such file or directory
Mar 22 12:15:26 kvm-integration-slave systemd-udevd[27624]: Could not generate persistent MAC address for veth7e59a5a: No such file or directory
Mar 22 12:15:27 kvm-integration-slave systemd-udevd[27891]: Could not generate persistent MAC address for vethb1ed52d: No such file or directory
Mar 22 12:15:27 kvm-integration-slave systemd-udevd[27892]: Could not generate persistent MAC address for veth76a78e9: No such file or directory
Mar 22 12:15:27 kvm-integration-slave systemd-udevd[27923]: Could not generate persistent MAC address for veth4e13c17: No such file or directory
Mar 22 12:15:27 kvm-integration-slave systemd-udevd[27922]: Could not generate persistent MAC address for vethadb63fe: No such file or directory
Mar 22 12:15:35 kvm-integration-slave sudo[28312]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 12:15:35 kvm-integration-slave sudo[28326]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm reset --force
Mar 22 12:15:41 kvm-integration-slave sudo[29520]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl stop kubelet.service
Mar 22 12:15:41 kvm-integration-slave sudo[29531]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /data/minikube /etc/kubernetes/manifests /var/lib/minikube
Mar 22 12:15:41 kvm-integration-slave sudo[29594]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl restart docker
Mar 22 12:15:43 kvm-integration-slave sudo[29771]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.8
Mar 22 12:15:43 kvm-integration-slave sudo[29790]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/coredns_1.2.6
Mar 22 12:15:44 kvm-integration-slave sudo[29812]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause-amd64_3.1
Mar 22 12:15:44 kvm-integration-slave sudo[29831]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause_3.1
Mar 22 12:15:44 kvm-integration-slave sudo[29850]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
Mar 22 12:15:44 kvm-integration-slave sudo[29869]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.8
Mar 22 12:15:44 kvm-integration-slave sudo[29888]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/storage-provisioner_v1.8.1
Mar 22 12:15:45 kvm-integration-slave sudo[29907]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-addon-manager_v8.6
Mar 22 12:15:45 kvm-integration-slave sudo[29926]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-controller-manager-amd64_v1.13.4
Mar 22 12:15:45 kvm-integration-slave sudo[29945]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-scheduler-amd64_v1.13.4
Mar 22 12:15:45 kvm-integration-slave sudo[29965]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-proxy-amd64_v1.13.4
Mar 22 12:15:45 kvm-integration-slave sudo[29984]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kubernetes-dashboard-amd64_v1.10.1
Mar 22 12:15:46 kvm-integration-slave sudo[30003]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-apiserver-amd64_v1.13.4
Mar 22 12:15:46 kvm-integration-slave sudo[30022]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/etcd-amd64_3.2.24
Mar 22 12:15:46 kvm-integration-slave sudo[30025]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Mar 22 12:15:46 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 12:15:46 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 12:15:46 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 12:15:46 kvm-integration-slave sudo[30038]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl enable kubelet
Mar 22 12:15:46 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 12:15:46 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 12:15:46 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 12:15:46 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 12:15:46 kvm-integration-slave sudo[30024]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl start kubelet
Mar 22 12:15:46 kvm-integration-slave sudo[30059]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm config images pull --config /var/lib/kubeadm.yaml
Mar 22 12:15:47 kvm-integration-slave systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 22 12:15:47 kvm-integration-slave systemd[1]: kubelet.service: Unit entered failed state.
Mar 22 12:15:47 kvm-integration-slave systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 22 12:15:48 kvm-integration-slave sudo[30142]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=SystemVerification --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI
Mar 22 12:15:48 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 12:15:48 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 12:15:48 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 12:15:48 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 12:15:48 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 12:15:49 kvm-integration-slave systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 22 12:15:49 kvm-integration-slave systemd[1]: kubelet.service: Unit entered failed state.
Mar 22 12:15:49 kvm-integration-slave systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 22 12:16:27 kvm-integration-slave systemd-udevd[31430]: Could not generate persistent MAC address for veth24b640e: No such file or directory
Mar 22 12:16:27 kvm-integration-slave systemd-udevd[31431]: Could not generate persistent MAC address for vethaf7f7c4: No such file or directory
Mar 22 12:16:27 kvm-integration-slave systemd-udevd[31479]: Could not generate persistent MAC address for vetha693e87: No such file or directory
Mar 22 12:16:27 kvm-integration-slave systemd-udevd[31478]: Could not generate persistent MAC address for veth7b16bda: No such file or directory
Mar 22 12:16:28 kvm-integration-slave systemd-udevd[32004]: Could not generate persistent MAC address for veth2d6f4bc: No such file or directory
Mar 22 12:16:28 kvm-integration-slave systemd-udevd[32005]: Could not generate persistent MAC address for veth0cf37e0: No such file or directory
Mar 22 12:17:29 kvm-integration-slave sudo[707]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 12:17:29 kvm-integration-slave sudo[724]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 12:17:30 kvm-integration-slave sudo[752]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl stop kubelet.service
Mar 22 12:17:40 kvm-integration-slave sudo[1760]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl restart docker
Mar 22 12:17:43 kvm-integration-slave sudo[1935]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/coredns_1.2.6
Mar 22 12:17:43 kvm-integration-slave sudo[1954]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause-amd64_3.1
Mar 22 12:17:43 kvm-integration-slave sudo[1973]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause_3.1
Mar 22 12:17:43 kvm-integration-slave sudo[1992]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.8
Mar 22 12:17:43 kvm-integration-slave sudo[2013]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.8
Mar 22 12:17:44 kvm-integration-slave sudo[2032]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/storage-provisioner_v1.8.1
Mar 22 12:17:44 kvm-integration-slave sudo[2052]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
Mar 22 12:17:44 kvm-integration-slave sudo[2071]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-proxy-amd64_v1.13.4
Mar 22 12:17:44 kvm-integration-slave sudo[2090]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-scheduler-amd64_v1.13.4
Mar 22 12:17:44 kvm-integration-slave sudo[2108]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-addon-manager_v8.6
Mar 22 12:17:45 kvm-integration-slave sudo[2128]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kubernetes-dashboard-amd64_v1.10.1
Mar 22 12:17:45 kvm-integration-slave sudo[2147]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-controller-manager-amd64_v1.13.4
Mar 22 12:17:45 kvm-integration-slave sudo[2167]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/etcd-amd64_3.2.24
Mar 22 12:17:45 kvm-integration-slave sudo[2187]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-apiserver-amd64_v1.13.4
Mar 22 12:17:46 kvm-integration-slave sudo[2190]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Mar 22 12:17:46 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 12:17:46 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 12:17:46 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 12:17:46 kvm-integration-slave sudo[2203]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl enable kubelet
Mar 22 12:17:46 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 12:17:46 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 12:17:46 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 12:17:46 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 12:17:46 kvm-integration-slave sudo[2189]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl start kubelet
Mar 22 12:17:46 kvm-integration-slave sudo[2225]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm config images pull --config /var/lib/kubeadm.yaml
Mar 22 12:17:47 kvm-integration-slave sudo[2953]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase certs all --config /var/lib/kubeadm.yaml
Mar 22 12:17:48 kvm-integration-slave sudo[2969]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase kubeconfig all --config /var/lib/kubeadm.yaml
Mar 22 12:17:50 kvm-integration-slave sudo[2998]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase control-plane all --config /var/lib/kubeadm.yaml
Mar 22 12:17:50 kvm-integration-slave sudo[3008]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase etcd local --config /var/lib/kubeadm.yaml
Mar 22 12:17:56 kvm-integration-slave sudo[3026]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 12:17:56 kvm-integration-slave systemd-udevd[3030]: Could not generate persistent MAC address for vethbd1b901: No such file or directory
Mar 22 12:17:56 kvm-integration-slave systemd-udevd[3029]: Could not generate persistent MAC address for veth0812fe2: No such file or directory
Mar 22 12:17:56 kvm-integration-slave systemd-udevd[3074]: Could not generate persistent MAC address for veth763e883: No such file or directory
Mar 22 12:17:56 kvm-integration-slave systemd-udevd[3075]: Could not generate persistent MAC address for veth466553f: No such file or directory
Mar 22 12:17:56 kvm-integration-slave systemd-udevd[3159]: Could not generate persistent MAC address for veth9f7fbb7: No such file or directory
Mar 22 12:17:56 kvm-integration-slave systemd-udevd[3158]: Could not generate persistent MAC address for vetha6e840b: No such file or directory
Mar 22 12:18:06 kvm-integration-slave sudo[3779]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 12:18:06 kvm-integration-slave sudo[3792]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm reset --force
Mar 22 12:18:12 kvm-integration-slave sudo[4982]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl stop kubelet.service
Mar 22 12:18:12 kvm-integration-slave sudo[4993]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /data/minikube /etc/kubernetes/manifests /var/lib/minikube
Mar 22 12:18:12 kvm-integration-slave sudo[5010]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/cat /home/jenkins/minikube-integration/linux-amd64-none-3714-15210-481f9b55983b7ecbac8ebff2ffd88003f42f415d/kubeconfig
Mar 22 12:18:12 kvm-integration-slave sudo[5020]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=out/minikube-linux-amd64 status
Mar 22 12:18:13 kvm-integration-slave sudo[5031]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=out/minikube-linux-amd64 ip
Mar 22 12:18:13 kvm-integration-slave sudo[5111]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=out/minikube-linux-amd64 tunnel --cleanup
Mar 22 12:18:13 kvm-integration-slave sudo[5122]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=out/minikube-linux-amd64 delete
Mar 22 12:18:13 kvm-integration-slave sudo[5140]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -Rf /home/jenkins/minikube-integration/linux-amd64-none-3714-15210-481f9b55983b7ecbac8ebff2ffd88003f42f415d/.minikube
Mar 22 12:18:13 kvm-integration-slave sudo[5142]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -f /home/jenkins/minikube-integration/linux-amd64-none-3714-15210-481f9b55983b7ecbac8ebff2ffd88003f42f415d/kubeconfig
Mar 22 12:18:17 kvm-integration-slave sudo[5862]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_KVM ; USER=root ; COMMAND=/usr/bin/gsutil cp gs://minikube-builds/kvm-driver/docker-machine-driver-kvm /usr/local/bin/docker-machine-driver-kvm
Mar 22 12:18:19 kvm-integration-slave sudo[6063]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_KVM ; USER=root ; COMMAND=/bin/chmod +x /usr/local/bin/docker-machine-driver-kvm
Mar 22 12:18:28 kvm-integration-slave systemd-udevd[6958]: Could not generate persistent MAC address for virbr2: No such file or directory
Mar 22 12:18:32 kvm-integration-slave kernel: kvm [7074]: vcpu0, guest rIP: 0xffffffffbb646066 unhandled rdmsr: 0x140
Mar 22 12:18:32 kvm-integration-slave kernel: kvm [7074]: vcpu0, guest rIP: 0xffffffffbb646066 unhandled rdmsr: 0x4e
Mar 22 12:18:32 kvm-integration-slave kernel: kvm [7074]: vcpu1, guest rIP: 0xffffffffbb646066 unhandled rdmsr: 0x140
Mar 22 12:18:32 kvm-integration-slave kernel: kvm [7074]: vcpu1, guest rIP: 0xffffffffbb646066 unhandled rdmsr: 0x4e
Mar 22 12:19:10 kvm-integration-slave kernel: kvm [7074]: vcpu0, guest rIP: 0xffffffffbb646066 unhandled rdmsr: 0x34
Mar 22 12:19:10 kvm-integration-slave kernel: kvm [7074]: vcpu0, guest rIP: 0xffffffffbb646066 unhandled rdmsr: 0x606
Mar 22 12:22:29 kvm-integration-slave sudo[7379]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_KVM ; USER=root ; COMMAND=/sbin/route
Mar 22 12:22:35 kvm-integration-slave sudo[7421]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_KVM ; USER=root ; COMMAND=/sbin/ip route add 10.96.0.0/12 via 192.168.39.242
Mar 22 12:25:05 kvm-integration-slave systemd-udevd[8443]: Could not generate persistent MAC address for virbr2: No such file or directory
Mar 22 12:25:08 kvm-integration-slave kernel: kvm [8559]: vcpu0, guest rIP: 0xffffffffb8846066 unhandled rdmsr: 0x140
Mar 22 12:25:08 kvm-integration-slave kernel: kvm [8559]: vcpu0, guest rIP: 0xffffffffb8846066 unhandled rdmsr: 0x4e
Mar 22 12:25:08 kvm-integration-slave kernel: kvm [8559]: vcpu1, guest rIP: 0xffffffffb8846066 unhandled rdmsr: 0x140
Mar 22 12:25:08 kvm-integration-slave kernel: kvm [8559]: vcpu1, guest rIP: 0xffffffffb8846066 unhandled rdmsr: 0x4e
Mar 22 12:25:09 kvm-integration-slave sudo[8577]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_KVM ; USER=root ; COMMAND=/sbin/ip route add 10.96.0.0/12 via 192.168.39.242
Mar 22 12:25:47 kvm-integration-slave kernel: kvm [8559]: vcpu0, guest rIP: 0xffffffffb8846066 unhandled rdmsr: 0x34
Mar 22 12:25:47 kvm-integration-slave kernel: kvm [8559]: vcpu0, guest rIP: 0xffffffffb8846066 unhandled rdmsr: 0x606
Mar 22 12:31:52 kvm-integration-slave systemd-udevd[9726]: Could not generate persistent MAC address for virbr2: No such file or directory
Mar 22 12:31:56 kvm-integration-slave kernel: kvm [9842]: vcpu0, guest rIP: 0xffffffffab646066 unhandled rdmsr: 0x140
Mar 22 12:31:56 kvm-integration-slave kernel: kvm [9842]: vcpu0, guest rIP: 0xffffffffab646066 unhandled rdmsr: 0x4e
Mar 22 12:31:56 kvm-integration-slave kernel: kvm [9842]: vcpu1, guest rIP: 0xffffffffab646066 unhandled rdmsr: 0x140
Mar 22 12:31:56 kvm-integration-slave kernel: kvm [9842]: vcpu1, guest rIP: 0xffffffffab646066 unhandled rdmsr: 0x4e
Mar 22 12:32:32 kvm-integration-slave kernel: kvm [9842]: vcpu1, guest rIP: 0xffffffffab646066 unhandled rdmsr: 0x34
Mar 22 12:32:32 kvm-integration-slave kernel: kvm [9842]: vcpu1, guest rIP: 0xffffffffab646066 unhandled rdmsr: 0x606
Mar 22 12:36:28 kvm-integration-slave systemd-udevd[10084]: Could not generate persistent MAC address for virbr2: No such file or directory
Mar 22 12:36:31 kvm-integration-slave kernel: kvm [10201]: vcpu0, guest rIP: 0xffffffffab846066 unhandled rdmsr: 0x140
Mar 22 12:36:31 kvm-integration-slave kernel: kvm [10201]: vcpu0, guest rIP: 0xffffffffab846066 unhandled rdmsr: 0x4e
Mar 22 12:36:31 kvm-integration-slave kernel: kvm [10201]: vcpu1, guest rIP: 0xffffffffab846066 unhandled rdmsr: 0x140
Mar 22 12:36:31 kvm-integration-slave kernel: kvm [10201]: vcpu1, guest rIP: 0xffffffffab846066 unhandled rdmsr: 0x4e
Mar 22 12:37:12 kvm-integration-slave kernel: kvm [10201]: vcpu1, guest rIP: 0xffffffffab846066 unhandled rdmsr: 0x34
Mar 22 12:37:12 kvm-integration-slave kernel: kvm [10201]: vcpu1, guest rIP: 0xffffffffab846066 unhandled rdmsr: 0x606
Mar 22 12:41:11 kvm-integration-slave kernel: kvm [10501]: vcpu0, guest rIP: 0xffffffff8e046066 unhandled rdmsr: 0x140
Mar 22 12:41:11 kvm-integration-slave kernel: kvm [10501]: vcpu0, guest rIP: 0xffffffff8e046066 unhandled rdmsr: 0x4e
Mar 22 12:41:11 kvm-integration-slave kernel: kvm [10501]: vcpu1, guest rIP: 0xffffffff8e046066 unhandled rdmsr: 0x140
Mar 22 12:41:11 kvm-integration-slave kernel: kvm [10501]: vcpu1, guest rIP: 0xffffffff8e046066 unhandled rdmsr: 0x4e
Mar 22 12:41:53 kvm-integration-slave kernel: kvm [10501]: vcpu1, guest rIP: 0xffffffff8e046066 unhandled rdmsr: 0x34
Mar 22 12:41:53 kvm-integration-slave kernel: kvm [10501]: vcpu1, guest rIP: 0xffffffff8e046066 unhandled rdmsr: 0x606
Mar 22 12:43:20 kvm-integration-slave systemd-udevd[10700]: Could not generate persistent MAC address for virbr2: No such file or directory
Mar 22 12:43:24 kvm-integration-slave kernel: kvm [10819]: vcpu0, guest rIP: 0xffffffff81646066 unhandled rdmsr: 0x140
Mar 22 12:43:24 kvm-integration-slave kernel: kvm [10819]: vcpu0, guest rIP: 0xffffffff81646066 unhandled rdmsr: 0x4e
Mar 22 12:43:24 kvm-integration-slave kernel: kvm [10819]: vcpu1, guest rIP: 0xffffffff81646066 unhandled rdmsr: 0x140
Mar 22 12:43:24 kvm-integration-slave kernel: kvm [10819]: vcpu1, guest rIP: 0xffffffff81646066 unhandled rdmsr: 0x4e
Mar 22 12:44:00 kvm-integration-slave kernel: kvm [10819]: vcpu1, guest rIP: 0xffffffff81646066 unhandled rdmsr: 0x34
Mar 22 12:44:00 kvm-integration-slave kernel: kvm [10819]: vcpu1, guest rIP: 0xffffffff81646066 unhandled rdmsr: 0x606
Mar 22 12:47:40 kvm-integration-slave kernel: kvm [11104]: vcpu0, guest rIP: 0xffffffffbc846066 unhandled rdmsr: 0x140
Mar 22 12:47:40 kvm-integration-slave kernel: kvm [11104]: vcpu0, guest rIP: 0xffffffffbc846066 unhandled rdmsr: 0x4e
Mar 22 12:47:40 kvm-integration-slave kernel: kvm [11104]: vcpu1, guest rIP: 0xffffffffbc846066 unhandled rdmsr: 0x140
Mar 22 12:47:40 kvm-integration-slave kernel: kvm [11104]: vcpu1, guest rIP: 0xffffffffbc846066 unhandled rdmsr: 0x4e
Mar 22 12:48:16 kvm-integration-slave kernel: kvm [11104]: vcpu1, guest rIP: 0xffffffffbc846066 unhandled rdmsr: 0x34
Mar 22 12:48:16 kvm-integration-slave kernel: kvm [11104]: vcpu1, guest rIP: 0xffffffffbc846066 unhandled rdmsr: 0x606
Mar 22 12:49:42 kvm-integration-slave systemd-udevd[11317]: Could not generate persistent MAC address for virbr2: No such file or directory
Mar 22 12:49:45 kvm-integration-slave kernel: kvm [11434]: vcpu0, guest rIP: 0xffffffffb6246066 unhandled rdmsr: 0x140
Mar 22 12:49:45 kvm-integration-slave kernel: kvm [11434]: vcpu0, guest rIP: 0xffffffffb6246066 unhandled rdmsr: 0x4e
Mar 22 12:49:45 kvm-integration-slave kernel: kvm [11434]: vcpu1, guest rIP: 0xffffffffb6246066 unhandled rdmsr: 0x140
Mar 22 12:49:45 kvm-integration-slave kernel: kvm [11434]: vcpu1, guest rIP: 0xffffffffb6246066 unhandled rdmsr: 0x4e
Mar 22 12:50:21 kvm-integration-slave kernel: kvm [11434]: vcpu1, guest rIP: 0xffffffffb6246066 unhandled rdmsr: 0x34
Mar 22 12:50:21 kvm-integration-slave kernel: kvm [11434]: vcpu1, guest rIP: 0xffffffffb6246066 unhandled rdmsr: 0x606
Mar 22 12:53:53 kvm-integration-slave kernel: kvm [11745]: vcpu0, guest rIP: 0xffffffffb0446066 unhandled rdmsr: 0x140
Mar 22 12:53:53 kvm-integration-slave kernel: kvm [11745]: vcpu0, guest rIP: 0xffffffffb0446066 unhandled rdmsr: 0x4e
Mar 22 12:53:53 kvm-integration-slave kernel: kvm [11745]: vcpu1, guest rIP: 0xffffffffb0446066 unhandled rdmsr: 0x140
Mar 22 12:53:53 kvm-integration-slave kernel: kvm [11745]: vcpu1, guest rIP: 0xffffffffb0446066 unhandled rdmsr: 0x4e
Mar 22 12:54:31 kvm-integration-slave kernel: kvm [11745]: vcpu0, guest rIP: 0xffffffffb0446066 unhandled rdmsr: 0x34
Mar 22 12:54:31 kvm-integration-slave kernel: kvm [11745]: vcpu0, guest rIP: 0xffffffffb0446066 unhandled rdmsr: 0x606
Mar 22 12:55:53 kvm-integration-slave systemd-udevd[11958]: Could not generate persistent MAC address for virbr2: No such file or directory
Mar 22 12:55:56 kvm-integration-slave kernel: kvm [12074]: vcpu0, guest rIP: 0xffffffffa0e46066 unhandled rdmsr: 0x140
Mar 22 12:55:56 kvm-integration-slave kernel: kvm [12074]: vcpu0, guest rIP: 0xffffffffa0e46066 unhandled rdmsr: 0x4e
Mar 22 12:55:57 kvm-integration-slave kernel: kvm [12074]: vcpu1, guest rIP: 0xffffffffa0e46066 unhandled rdmsr: 0x140
Mar 22 12:55:57 kvm-integration-slave kernel: kvm [12074]: vcpu1, guest rIP: 0xffffffffa0e46066 unhandled rdmsr: 0x4e
Mar 22 12:56:32 kvm-integration-slave kernel: kvm [12074]: vcpu0, guest rIP: 0xffffffffa0e46066 unhandled rdmsr: 0x34
Mar 22 12:56:32 kvm-integration-slave kernel: kvm [12074]: vcpu0, guest rIP: 0xffffffffa0e46066 unhandled rdmsr: 0x606
Mar 22 13:01:42 kvm-integration-slave kernel: kvm [12427]: vcpu0, guest rIP: 0xffffffffb3c46066 unhandled rdmsr: 0x140
Mar 22 13:01:42 kvm-integration-slave kernel: kvm [12427]: vcpu0, guest rIP: 0xffffffffb3c46066 unhandled rdmsr: 0x4e
Mar 22 13:01:42 kvm-integration-slave kernel: kvm [12427]: vcpu1, guest rIP: 0xffffffffb3c46066 unhandled rdmsr: 0x140
Mar 22 13:01:42 kvm-integration-slave kernel: kvm [12427]: vcpu1, guest rIP: 0xffffffffb3c46066 unhandled rdmsr: 0x4e
Mar 22 13:02:20 kvm-integration-slave kernel: kvm [12427]: vcpu0, guest rIP: 0xffffffffb3c46066 unhandled rdmsr: 0x34
Mar 22 13:02:20 kvm-integration-slave kernel: kvm [12427]: vcpu0, guest rIP: 0xffffffffb3c46066 unhandled rdmsr: 0x606
Mar 22 13:04:03 kvm-integration-slave systemd-udevd[12632]: Could not generate persistent MAC address for virbr2: No such file or directory
Mar 22 13:04:06 kvm-integration-slave kernel: kvm [12747]: vcpu0, guest rIP: 0xffffffffab846066 unhandled rdmsr: 0x140
Mar 22 13:04:06 kvm-integration-slave kernel: kvm [12747]: vcpu0, guest rIP: 0xffffffffab846066 unhandled rdmsr: 0x4e
Mar 22 13:04:07 kvm-integration-slave kernel: kvm [12747]: vcpu1, guest rIP: 0xffffffffab846066 unhandled rdmsr: 0x140
Mar 22 13:04:07 kvm-integration-slave kernel: kvm [12747]: vcpu1, guest rIP: 0xffffffffab846066 unhandled rdmsr: 0x4e
Mar 22 13:04:45 kvm-integration-slave kernel: kvm [12747]: vcpu1, guest rIP: 0xffffffffab846066 unhandled rdmsr: 0x34
Mar 22 13:04:45 kvm-integration-slave kernel: kvm [12747]: vcpu1, guest rIP: 0xffffffffab846066 unhandled rdmsr: 0x606
Mar 22 13:08:41 kvm-integration-slave sudo[14114]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm reset
Mar 22 13:08:41 kvm-integration-slave sudo[14123]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm reset -f
Mar 22 13:08:41 kvm-integration-slave sudo[14145]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /data/*
Mar 22 13:08:41 kvm-integration-slave sudo[14147]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /etc/kubernetes/addons
Mar 22 13:08:41 kvm-integration-slave sudo[14149]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /var/lib/minikube/*
Mar 22 13:08:46 kvm-integration-slave sudo[14979]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/kill 8047 20528
Mar 22 13:08:46 kvm-integration-slave sudo[15000]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=out/e2e-linux-amd64 -minikube-start-args=--vm-driver=none -minikube-args=--v=10 --logtostderr --bootstrapper=kubeadm -test.v -test.timeout=50m -binary=out/minikube-linux-amd64
Mar 22 13:08:47 kvm-integration-slave sudo[15040]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl restart docker
Mar 22 13:08:49 kvm-integration-slave sudo[15213]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
Mar 22 13:08:49 kvm-integration-slave sudo[15234]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause-amd64_3.1
Mar 22 13:08:49 kvm-integration-slave sudo[15252]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause_3.1
Mar 22 13:08:50 kvm-integration-slave sudo[15271]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.8
Mar 22 13:08:50 kvm-integration-slave sudo[15289]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/coredns_1.2.6
Mar 22 13:08:50 kvm-integration-slave sudo[15309]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-scheduler-amd64_v1.13.4
Mar 22 13:08:50 kvm-integration-slave sudo[15328]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-addon-manager_v8.6
Mar 22 13:08:50 kvm-integration-slave sudo[15347]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/storage-provisioner_v1.8.1
Mar 22 13:08:50 kvm-integration-slave sudo[15368]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.8
Mar 22 13:08:51 kvm-integration-slave sudo[15387]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-proxy-amd64_v1.13.4
Mar 22 13:08:51 kvm-integration-slave sudo[15406]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kubernetes-dashboard-amd64_v1.10.1
Mar 22 13:08:51 kvm-integration-slave sudo[15425]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-controller-manager-amd64_v1.13.4
Mar 22 13:08:51 kvm-integration-slave sudo[15444]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-apiserver-amd64_v1.13.4
Mar 22 13:08:52 kvm-integration-slave sudo[15463]: root : TTY=unknown ;
PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/etcd-amd64_3.2.24 Mar 22 13:08:52 kvm-integration-slave sudo[15467]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl daemon-reload Mar 22 13:08:53 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway. Mar 22 13:08:53 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway. Mar 22 13:08:53 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway. Mar 22 13:08:53 kvm-integration-slave sudo[15480]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl enable kubelet Mar 22 13:08:53 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway. Mar 22 13:08:53 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway. Mar 22 13:08:53 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway. Mar 22 13:08:53 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. 
This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway. Mar 22 13:08:53 kvm-integration-slave sudo[15466]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl start kubelet Mar 22 13:08:53 kvm-integration-slave systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a Mar 22 13:08:53 kvm-integration-slave systemd[1]: kubelet.service: Unit entered failed state. Mar 22 13:08:53 kvm-integration-slave systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 22 13:08:54 kvm-integration-slave sudo[15507]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm config images pull --config /var/lib/kubeadm.yaml Mar 22 13:08:55 kvm-integration-slave sudo[15585]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI Mar 22 13:08:55 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway. Mar 22 13:08:55 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. 
This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway. Mar 22 13:08:56 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway. Mar 22 13:08:56 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway. Mar 22 13:08:56 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway. Mar 22 13:08:56 kvm-integration-slave systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a Mar 22 13:08:56 kvm-integration-slave systemd[1]: kubelet.service: Unit entered failed state. Mar 22 13:08:56 kvm-integration-slave systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 22 13:09:31 kvm-integration-slave systemd-udevd[16746]: Could not generate persistent MAC address for veth555d49f: No such file or directory
Mar 22 13:09:31 kvm-integration-slave systemd-udevd[16748]: Could not generate persistent MAC address for veth574c151: No such file or directory
Mar 22 13:09:32 kvm-integration-slave systemd-udevd[16790]: Could not generate persistent MAC address for veth2b87990: No such file or directory
Mar 22 13:09:32 kvm-integration-slave systemd-udevd[16791]: Could not generate persistent MAC address for vethf351ba2: No such file or directory
Mar 22 13:10:37 kvm-integration-slave sudo[18017]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 13:10:37 kvm-integration-slave sudo[18034]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 13:10:37 kvm-integration-slave sudo[18046]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/sbin/route
Mar 22 13:10:38 kvm-integration-slave systemd-udevd[18107]: Could not generate persistent MAC address for vethe5b6a49: No such file or directory
Mar 22 13:10:38 kvm-integration-slave systemd-udevd[18108]: Could not generate persistent MAC address for veth66b5697: No such file or directory
Mar 22 13:10:41 kvm-integration-slave sudo[18357]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 13:10:42 kvm-integration-slave sudo[18383]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 13:10:42 kvm-integration-slave systemd-udevd[18395]: Could not generate persistent MAC address for veth2ff486e: No such file or directory
Mar 22 13:10:42 kvm-integration-slave systemd-udevd[18396]: Could not generate persistent MAC address for veth0b381b4: No such file or directory
Mar 22 13:10:42 kvm-integration-slave sudo[18559]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/dmesg -PH -L=never --level warn,err,crit,alert,emerg
Mar 22 13:10:42 kvm-integration-slave sudo[18587]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/sbin/ip route add 10.96.0.0/12 via 10.128.0.3
Mar 22 13:11:33 kvm-integration-slave systemd-udevd[19373]: Could not generate persistent MAC address for vethab9da23: No such file or directory
Mar 22 13:11:33 kvm-integration-slave systemd-udevd[19372]: Could not generate persistent MAC address for veth687af1a: No such file or directory
Mar 22 13:11:35 kvm-integration-slave sudo[19606]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm reset --force
Mar 22 13:11:37 kvm-integration-slave sudo[20130]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/sbin/ip route delete 10.96.0.0/12
Mar 22 13:11:41 kvm-integration-slave sudo[20899]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl stop kubelet.service
Mar 22 13:11:41 kvm-integration-slave sudo[20910]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /data/minikube /etc/kubernetes/manifests /var/lib/minikube
Mar 22 13:11:41 kvm-integration-slave sudo[20939]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl restart docker
Mar 22 13:11:44 kvm-integration-slave sudo[21111]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause_3.1
Mar 22 13:11:44 kvm-integration-slave sudo[21130]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.8
Mar 22 13:11:44 kvm-integration-slave sudo[21150]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.8
Mar 22 13:11:44 kvm-integration-slave sudo[21171]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-controller-manager-amd64_v1.13.4
Mar 22 13:11:44 kvm-integration-slave sudo[21190]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-proxy-amd64_v1.13.4
Mar 22 13:11:45 kvm-integration-slave sudo[21208]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kubernetes-dashboard-amd64_v1.10.1
Mar 22 13:11:45 kvm-integration-slave sudo[21227]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
Mar 22 13:11:45 kvm-integration-slave sudo[21246]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause-amd64_3.1
Mar 22 13:11:45 kvm-integration-slave sudo[21265]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/coredns_1.2.6
Mar 22 13:11:45 kvm-integration-slave sudo[21284]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/storage-provisioner_v1.8.1
Mar 22 13:11:46 kvm-integration-slave sudo[21303]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-scheduler-amd64_v1.13.4
Mar 22 13:11:46 kvm-integration-slave sudo[21322]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-addon-manager_v8.6
Mar 22 13:11:46 kvm-integration-slave sudo[21341]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/etcd-amd64_3.2.24
Mar 22 13:11:46 kvm-integration-slave sudo[21361]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-apiserver-amd64_v1.13.4
Mar 22 13:11:46 kvm-integration-slave sudo[21365]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Mar 22 13:11:46 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 13:11:46 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:11:46 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:11:46 kvm-integration-slave sudo[21378]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl enable kubelet
Mar 22 13:11:46 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:11:46 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 13:11:46 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:11:46 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:11:47 kvm-integration-slave sudo[21364]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl start kubelet
Mar 22 13:11:47 kvm-integration-slave sudo[21399]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm config images pull --config /var/lib/kubeadm.yaml
Mar 22 13:11:47 kvm-integration-slave systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 22 13:11:47 kvm-integration-slave systemd[1]: kubelet.service: Unit entered failed state.
Mar 22 13:11:47 kvm-integration-slave systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 22 13:11:48 kvm-integration-slave sudo[21485]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI
Mar 22 13:11:48 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:11:48 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:11:49 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 13:11:49 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:11:49 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:11:49 kvm-integration-slave systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 22 13:11:49 kvm-integration-slave systemd[1]: kubelet.service: Unit entered failed state.
Mar 22 13:11:49 kvm-integration-slave systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 22 13:12:27 kvm-integration-slave systemd-udevd[22673]: Could not generate persistent MAC address for veth136b866: No such file or directory
Mar 22 13:12:27 kvm-integration-slave systemd-udevd[22674]: Could not generate persistent MAC address for veth4faba53: No such file or directory
Mar 22 13:12:27 kvm-integration-slave systemd-udevd[22709]: Could not generate persistent MAC address for vethd7bab01: No such file or directory
Mar 22 13:12:27 kvm-integration-slave systemd-udevd[22710]: Could not generate persistent MAC address for vethf19b6c6: No such file or directory
Mar 22 13:12:29 kvm-integration-slave systemd-udevd[23372]: Could not generate persistent MAC address for vethe5322de: No such file or directory
Mar 22 13:12:29 kvm-integration-slave systemd-udevd[23371]: Could not generate persistent MAC address for veth50a8a51: No such file or directory
Mar 22 13:13:39 kvm-integration-slave sudo[24595]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 13:13:39 kvm-integration-slave sudo[24612]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 13:13:40 kvm-integration-slave sudo[24640]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl stop kubelet.service
Mar 22 13:13:41 kvm-integration-slave systemd-udevd[25410]: link_config: could not get ethtool features for veth136b866
Mar 22 13:13:41 kvm-integration-slave systemd-udevd[25410]: Could not set offload features of veth136b866: No such device
Mar 22 13:13:50 kvm-integration-slave sudo[25610]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl restart docker
Mar 22 13:13:53 kvm-integration-slave sudo[25787]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.8
Mar 22 13:13:53 kvm-integration-slave sudo[25806]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause-amd64_3.1
Mar 22 13:13:53 kvm-integration-slave sudo[25825]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause_3.1
Mar 22 13:13:53 kvm-integration-slave sudo[25844]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-scheduler-amd64_v1.13.4
Mar 22 13:13:53 kvm-integration-slave sudo[25864]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.8
Mar 22 13:13:53 kvm-integration-slave sudo[25883]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
Mar 22 13:13:54 kvm-integration-slave sudo[25902]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-addon-manager_v8.6
Mar 22 13:13:54 kvm-integration-slave sudo[25921]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/coredns_1.2.6
Mar 22 13:13:54 kvm-integration-slave sudo[25940]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-proxy-amd64_v1.13.4
Mar 22 13:13:54 kvm-integration-slave sudo[25959]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/storage-provisioner_v1.8.1
Mar 22 13:13:55 kvm-integration-slave sudo[25978]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/etcd-amd64_3.2.24
Mar 22 13:13:55 kvm-integration-slave sudo[25997]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-controller-manager-amd64_v1.13.4
Mar 22 13:13:55 kvm-integration-slave sudo[26015]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-apiserver-amd64_v1.13.4
Mar 22 13:13:55 kvm-integration-slave sudo[26034]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kubernetes-dashboard-amd64_v1.10.1
Mar 22 13:13:55 kvm-integration-slave sudo[26037]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Mar 22 13:13:55 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 13:13:55 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:13:55 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:13:55 kvm-integration-slave sudo[26050]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl enable kubelet
Mar 22 13:13:55 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:13:56 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 13:13:56 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:13:56 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:13:56 kvm-integration-slave sudo[26036]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl start kubelet
Mar 22 13:13:56 kvm-integration-slave sudo[26071]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm config images pull --config /var/lib/kubeadm.yaml
Mar 22 13:13:57 kvm-integration-slave sudo[26781]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase certs all --config /var/lib/kubeadm.yaml
Mar 22 13:13:58 kvm-integration-slave sudo[26801]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase kubeconfig all --config /var/lib/kubeadm.yaml
Mar 22 13:13:59 kvm-integration-slave sudo[26817]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase control-plane all --config /var/lib/kubeadm.yaml
Mar 22 13:13:59 kvm-integration-slave sudo[26829]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase etcd local --config /var/lib/kubeadm.yaml
Mar 22 13:14:05 kvm-integration-slave sudo[26842]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 13:14:06 kvm-integration-slave systemd-udevd[26860]: Could not generate persistent MAC address for veth42f2840: No such file or directory
Mar 22 13:14:06 kvm-integration-slave systemd-udevd[26859]: Could not generate persistent MAC address for veth7901682: No such file or directory
Mar 22 13:14:06 kvm-integration-slave systemd-udevd[26873]: Could not generate persistent MAC address for veth98b9d41: No such file or directory
Mar 22 13:14:06 kvm-integration-slave systemd-udevd[26872]: Could not generate persistent MAC address for vethdf6201a: No such file or directory
Mar 22 13:14:06 kvm-integration-slave systemd-udevd[26978]: Could not generate persistent MAC address for veth25fa92d: No such file or directory
Mar 22 13:14:06 kvm-integration-slave systemd-udevd[26977]: Could not generate persistent MAC address for veth6ce2258: No such file or directory
Mar 22 13:14:16 kvm-integration-slave sudo[27548]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 13:14:16 kvm-integration-slave sudo[27561]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm reset --force
Mar 22 13:14:22 kvm-integration-slave sudo[28763]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl stop kubelet.service
Mar 22 13:14:22 kvm-integration-slave sudo[28774]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /data/minikube /etc/kubernetes/manifests /var/lib/minikube
Mar 22 13:14:23 kvm-integration-slave sudo[28840]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl restart docker
Mar 22 13:14:25 kvm-integration-slave sudo[29016]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/storage-provisioner_v1.8.1
Mar 22 13:14:25 kvm-integration-slave sudo[29035]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause-amd64_3.1
Mar 22 13:14:25 kvm-integration-slave sudo[29054]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause_3.1
Mar 22 13:14:25 kvm-integration-slave sudo[29072]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.8
Mar 22 13:14:25 kvm-integration-slave sudo[29091]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-controller-manager-amd64_v1.13.4
Mar 22 13:14:26 kvm-integration-slave sudo[29110]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
Mar 22 13:14:26 kvm-integration-slave sudo[29130]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-proxy-amd64_v1.13.4
Mar 22 13:14:26 kvm-integration-slave sudo[29148]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.8
Mar 22 13:14:26 kvm-integration-slave sudo[29168]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-apiserver-amd64_v1.13.4
Mar 22 13:14:26 kvm-integration-slave sudo[29187]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/coredns_1.2.6
Mar 22 13:14:27 kvm-integration-slave sudo[29206]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-scheduler-amd64_v1.13.4
Mar 22 13:14:27 kvm-integration-slave sudo[29224]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-addon-manager_v8.6
Mar 22 13:14:28 kvm-integration-slave sudo[29243]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kubernetes-dashboard-amd64_v1.10.1
Mar 22 13:14:28 kvm-integration-slave sudo[29263]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/etcd-amd64_3.2.24
Mar 22 13:14:29 kvm-integration-slave sudo[29266]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Mar 22 13:14:29 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 13:14:29 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:14:29 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:14:29 kvm-integration-slave sudo[29279]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl enable kubelet
Mar 22 13:14:29 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:14:29 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 13:14:29 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:14:29 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:14:29 kvm-integration-slave sudo[29265]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl start kubelet
Mar 22 13:14:29 kvm-integration-slave sudo[29300]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm config images pull --config /var/lib/kubeadm.yaml
Mar 22 13:14:30 kvm-integration-slave systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 22 13:14:30 kvm-integration-slave systemd[1]: kubelet.service: Unit entered failed state.
Mar 22 13:14:30 kvm-integration-slave systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 22 13:14:30 kvm-integration-slave sudo[29381]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=SystemVerification --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI
Mar 22 13:14:30 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:14:30 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions.
Proceeding anyway. Mar 22 13:14:31 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway. Mar 22 13:14:31 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway. Mar 22 13:14:31 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway. Mar 22 13:14:31 kvm-integration-slave systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a Mar 22 13:14:31 kvm-integration-slave systemd[1]: kubelet.service: Unit entered failed state. Mar 22 13:14:31 kvm-integration-slave systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 22 13:15:08 kvm-integration-slave systemd-udevd[30556]: Could not generate persistent MAC address for veth47f224f: No such file or directory
Mar 22 13:15:08 kvm-integration-slave systemd-udevd[30557]: Could not generate persistent MAC address for veth896a6b4: No such file or directory
Mar 22 13:15:08 kvm-integration-slave systemd-udevd[30594]: Could not generate persistent MAC address for veth46d49f0: No such file or directory
Mar 22 13:15:08 kvm-integration-slave systemd-udevd[30597]: Could not generate persistent MAC address for veth8a1937e: No such file or directory
Mar 22 13:15:10 kvm-integration-slave systemd-udevd[31151]: Could not generate persistent MAC address for veth3697dd3: No such file or directory
Mar 22 13:15:10 kvm-integration-slave systemd-udevd[31152]: Could not generate persistent MAC address for veth695622c: No such file or directory
Mar 22 13:16:12 kvm-integration-slave sudo[32276]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 13:16:12 kvm-integration-slave sudo[32292]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 13:16:12 kvm-integration-slave sudo[32317]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl stop kubelet.service
Mar 22 13:16:23 kvm-integration-slave sudo[877]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl restart docker
Mar 22 13:16:25 kvm-integration-slave sudo[1063]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause_3.1
Mar 22 13:16:25 kvm-integration-slave sudo[1082]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause-amd64_3.1
Mar 22 13:16:25 kvm-integration-slave sudo[1100]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.8
Mar 22 13:16:26 kvm-integration-slave sudo[1119]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-proxy-amd64_v1.13.4
Mar 22 13:16:26 kvm-integration-slave sudo[1138]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-controller-manager-amd64_v1.13.4
Mar 22 13:16:26 kvm-integration-slave sudo[1160]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/storage-provisioner_v1.8.1
Mar 22 13:16:26 kvm-integration-slave sudo[1179]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-scheduler-amd64_v1.13.4
Mar 22 13:16:27 kvm-integration-slave sudo[1198]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
Mar 22 13:16:27 kvm-integration-slave sudo[1222]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/coredns_1.2.6
Mar 22 13:16:27 kvm-integration-slave sudo[1242]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.8
Mar 22 13:16:27 kvm-integration-slave sudo[1262]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-apiserver-amd64_v1.13.4
Mar 22 13:16:27 kvm-integration-slave sudo[1281]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-addon-manager_v8.6
Mar 22 13:16:28 kvm-integration-slave sudo[1300]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kubernetes-dashboard-amd64_v1.10.1
Mar 22 13:16:28 kvm-integration-slave sudo[1319]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/etcd-amd64_3.2.24
Mar 22 13:16:28 kvm-integration-slave sudo[1322]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Mar 22 13:16:28 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 13:16:28 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:16:28 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:16:28 kvm-integration-slave sudo[1335]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl enable kubelet
Mar 22 13:16:28 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:16:28 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 13:16:28 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:16:28 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 13:16:28 kvm-integration-slave sudo[1321]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl start kubelet
Mar 22 13:16:28 kvm-integration-slave sudo[1356]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm config images pull --config /var/lib/kubeadm.yaml
Mar 22 13:16:30 kvm-integration-slave sudo[1995]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase certs all --config /var/lib/kubeadm.yaml
Mar 22 13:16:30 kvm-integration-slave sudo[2073]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase kubeconfig all --config /var/lib/kubeadm.yaml
Mar 22 13:16:32 kvm-integration-slave sudo[2106]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase control-plane all --config /var/lib/kubeadm.yaml
Mar 22 13:16:32 kvm-integration-slave sudo[2117]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase etcd local --config /var/lib/kubeadm.yaml
Mar 22 13:16:38 kvm-integration-slave sudo[2135]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 13:16:40 kvm-integration-slave systemd-udevd[2164]: Could not generate persistent MAC address for veth35d80a0: No such file or directory
Mar 22 13:16:40 kvm-integration-slave systemd-udevd[2165]: Could not generate persistent MAC address for vethec20060: No such file or directory
Mar 22 13:16:41 kvm-integration-slave systemd-udevd[2317]: Could not generate persistent MAC address for vetha4c1906: No such file or directory
Mar 22 13:16:41 kvm-integration-slave systemd-udevd[2318]: Could not generate persistent MAC address for veth363e796: No such file or directory
Mar 22 13:16:41 kvm-integration-slave systemd-udevd[2317]: Could not generate persistent MAC address for vethe760bd2: No such file or directory
Mar 22 13:16:41 kvm-integration-slave systemd-udevd[2334]: Could not generate persistent MAC address for veth119fbeb: No such file or directory
Mar 22 13:16:48 kvm-integration-slave sudo[2790]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 13:16:48 kvm-integration-slave sudo[2804]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm reset --force
Mar 22 13:16:55 kvm-integration-slave sudo[3993]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl stop kubelet.service
Mar 22 13:16:55 kvm-integration-slave sudo[4004]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /data/minikube /etc/kubernetes/manifests /var/lib/minikube
Mar 22 13:16:55 kvm-integration-slave sudo[4021]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/cat /home/jenkins/minikube-integration/linux-amd64-none-master-14113-cbac94a53d8e3df959b9e22ef1a20a132d5f9dd1/kubeconfig
Mar 22 13:16:55 kvm-integration-slave sudo[4031]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=out/minikube-linux-amd64 status
Mar 22 13:16:55 kvm-integration-slave sudo[4043]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=out/minikube-linux-amd64 ip
Mar 22 13:16:55 kvm-integration-slave sudo[4122]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=out/minikube-linux-amd64 tunnel --cleanup
Mar 22 13:16:56 kvm-integration-slave sudo[4134]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=out/minikube-linux-amd64 delete
Mar 22 13:16:56 kvm-integration-slave sudo[4153]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -Rf /home/jenkins/minikube-integration/linux-amd64-none-master-14113-cbac94a53d8e3df959b9e22ef1a20a132d5f9dd1/.minikube
Mar 22 13:16:56 kvm-integration-slave sudo[4155]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -f /home/jenkins/minikube-integration/linux-amd64-none-master-14113-cbac94a53d8e3df959b9e22ef1a20a132d5f9dd1/kubeconfig
Mar 22 13:17:00 kvm-integration-slave sudo[4873]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_KVM ; USER=root ; COMMAND=/usr/bin/gsutil cp gs://minikube-builds/kvm-driver/docker-machine-driver-kvm /usr/local/bin/docker-machine-driver-kvm
Mar 22 13:17:01 kvm-integration-slave sudo[5053]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_KVM ; USER=root ; COMMAND=/bin/chmod +x /usr/local/bin/docker-machine-driver-kvm
Mar 22 13:17:01 kvm-integration-slave root[5086]: cleanup-and-reboot running - may shutdown in 60 seconds
Mar 22 13:17:10 kvm-integration-slave systemd-udevd[5959]: Could not generate persistent MAC address for virbr2: No such file or directory
Mar 22 13:17:14 kvm-integration-slave kernel: kvm [6094]: vcpu0, guest rIP: 0xffffffff82046066 unhandled rdmsr: 0x140
Mar 22 13:17:14 kvm-integration-slave kernel: kvm [6094]: vcpu0, guest rIP: 0xffffffff82046066 unhandled rdmsr: 0x4e
Mar 22 13:17:14 kvm-integration-slave kernel: kvm [6094]: vcpu1, guest rIP: 0xffffffff82046066 unhandled rdmsr: 0x140
Mar 22 13:17:14 kvm-integration-slave kernel: kvm [6094]: vcpu1, guest rIP: 0xffffffff82046066 unhandled rdmsr: 0x4e
Mar 22 13:17:58 kvm-integration-slave kernel: kvm [6094]: vcpu0, guest rIP: 0xffffffff82046066 unhandled rdmsr: 0x34
Mar 22 13:17:58 kvm-integration-slave kernel: kvm [6094]: vcpu0, guest rIP: 0xffffffff82046066 unhandled rdmsr: 0x606
Mar 22 13:21:09 kvm-integration-slave sudo[6435]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_KVM ; USER=root ; COMMAND=/sbin/route
Mar 22 13:21:14 kvm-integration-slave sudo[6481]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_KVM ; USER=root ; COMMAND=/sbin/ip route add 10.96.0.0/12 via 192.168.39.50
Mar 22 13:23:55 kvm-integration-slave systemd-udevd[7538]: Could not generate persistent MAC address for virbr2: No such file or directory
Mar 22 13:23:58 kvm-integration-slave sudo[7671]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_KVM ; USER=root ; COMMAND=/sbin/ip route add 10.96.0.0/12 via 192.168.39.50
Mar 22 13:23:59 kvm-integration-slave kernel: kvm [7653]: vcpu0, guest rIP: 0xffffffff81c46066 unhandled rdmsr: 0x140
Mar 22 13:23:59 kvm-integration-slave kernel: kvm [7653]: vcpu0, guest rIP: 0xffffffff81c46066 unhandled rdmsr: 0x4e
Mar 22 13:23:59 kvm-integration-slave kernel: kvm [7653]: vcpu1, guest rIP: 0xffffffff81c46066 unhandled rdmsr: 0x140
Mar 22 13:23:59 kvm-integration-slave kernel: kvm [7653]: vcpu1, guest rIP: 0xffffffff81c46066 unhandled rdmsr: 0x4e
Mar 22 13:24:39 kvm-integration-slave kernel: kvm [7653]: vcpu1, guest rIP: 0xffffffff81c46066 unhandled rdmsr: 0x34
Mar 22 13:24:39 kvm-integration-slave kernel: kvm [7653]: vcpu1, guest rIP: 0xffffffff81c46066 unhandled rdmsr: 0x606
Mar 22 13:30:46 kvm-integration-slave systemd-udevd[8791]: Could not generate persistent MAC address for virbr2: No such file or directory
Mar 22 13:30:49 kvm-integration-slave kernel: kvm [8917]: vcpu0, guest rIP: 0xffffffffb3046066 unhandled rdmsr: 0x140
Mar 22 13:30:49 kvm-integration-slave kernel: kvm [8917]: vcpu0, guest rIP: 0xffffffffb3046066 unhandled rdmsr: 0x4e
Mar 22 13:30:49 kvm-integration-slave kernel: kvm [8917]: vcpu1, guest rIP: 0xffffffffb3046066 unhandled rdmsr: 0x140
Mar 22 13:30:49 kvm-integration-slave kernel: kvm [8917]: vcpu1, guest rIP: 0xffffffffb3046066 unhandled rdmsr: 0x4e
Mar 22 13:31:27 kvm-integration-slave kernel: kvm [8917]: vcpu1, guest rIP: 0xffffffffb3046066 unhandled rdmsr: 0x34
Mar 22 13:31:27 kvm-integration-slave kernel: kvm [8917]: vcpu1, guest rIP: 0xffffffffb3046066 unhandled rdmsr: 0x606
Mar 22 13:35:36 kvm-integration-slave systemd-udevd[9191]: Could not generate persistent MAC address for virbr2: No such file or directory
Mar 22 13:35:40 kvm-integration-slave kernel: kvm [9306]: vcpu0, guest rIP: 0xffffffff8ce46066 unhandled rdmsr: 0x140
Mar 22 13:35:40 kvm-integration-slave kernel: kvm [9306]: vcpu0, guest rIP: 0xffffffff8ce46066 unhandled rdmsr: 0x4e
Mar 22 13:35:40 kvm-integration-slave kernel: kvm [9306]: vcpu1, guest rIP: 0xffffffff8ce46066 unhandled rdmsr: 0x140
Mar 22 13:35:40 kvm-integration-slave kernel: kvm [9306]: vcpu1, guest rIP: 0xffffffff8ce46066 unhandled rdmsr: 0x4e
Mar 22 13:36:23 kvm-integration-slave kernel: kvm [9306]: vcpu0, guest rIP: 0xffffffff8ce46066 unhandled rdmsr: 0x34
Mar 22 13:36:23 kvm-integration-slave kernel: kvm [9306]: vcpu0, guest rIP: 0xffffffff8ce46066 unhandled rdmsr: 0x606
Mar 22 13:40:27 kvm-integration-slave kernel: kvm [9610]: vcpu0, guest rIP: 0xffffffffb8e46066 unhandled rdmsr: 0x140
Mar 22 13:40:27 kvm-integration-slave kernel: kvm [9610]: vcpu0, guest rIP: 0xffffffffb8e46066 unhandled rdmsr: 0x4e
Mar 22 13:40:27 kvm-integration-slave kernel: kvm [9610]: vcpu1, guest rIP: 0xffffffffb8e46066 unhandled rdmsr: 0x140
Mar 22 13:40:27 kvm-integration-slave kernel: kvm [9610]: vcpu1, guest rIP: 0xffffffffb8e46066 unhandled rdmsr: 0x4e
Mar 22 13:41:07 kvm-integration-slave kernel: kvm [9610]: vcpu0, guest rIP: 0xffffffffb8e46066 unhandled rdmsr: 0x34
Mar 22 13:41:07 kvm-integration-slave kernel: kvm [9610]: vcpu0, guest rIP: 0xffffffffb8e46066 unhandled rdmsr: 0x606
Mar 22 13:42:34 kvm-integration-slave systemd-udevd[9841]: Could not generate persistent MAC address for virbr2: No such file or directory
Mar 22 13:42:38 kvm-integration-slave kernel: kvm [9956]: vcpu0, guest rIP: 0xffffffffb6046066 unhandled rdmsr: 0x140
Mar 22 13:42:38 kvm-integration-slave kernel: kvm [9956]: vcpu0, guest rIP: 0xffffffffb6046066 unhandled rdmsr: 0x4e
Mar 22 13:42:38 kvm-integration-slave kernel: kvm [9956]: vcpu1, guest rIP: 0xffffffffb6046066 unhandled rdmsr: 0x140
Mar 22 13:42:38 kvm-integration-slave kernel: kvm [9956]: vcpu1, guest rIP: 0xffffffffb6046066 unhandled rdmsr: 0x4e
Mar 22 13:43:20 kvm-integration-slave kernel: kvm [9956]: vcpu1, guest rIP: 0xffffffffb6046066 unhandled rdmsr: 0x34
Mar 22 13:43:20 kvm-integration-slave kernel: kvm [9956]: vcpu1, guest rIP: 0xffffffffb6046066 unhandled rdmsr: 0x606
Mar 22 13:46:50 kvm-integration-slave kernel: kvm [10234]: vcpu0, guest rIP: 0xffffffffb9e46066 unhandled rdmsr: 0x140
Mar 22 13:46:50 kvm-integration-slave kernel: kvm [10234]: vcpu0, guest rIP: 0xffffffffb9e46066 unhandled rdmsr: 0x4e
Mar 22 13:46:50 kvm-integration-slave kernel: kvm [10234]: vcpu1, guest rIP: 0xffffffffb9e46066 unhandled rdmsr: 0x140
Mar 22 13:46:50 kvm-integration-slave kernel: kvm [10234]: vcpu1, guest rIP: 0xffffffffb9e46066 unhandled rdmsr: 0x4e
Mar 22 13:47:35 kvm-integration-slave kernel: kvm [10234]: vcpu1, guest rIP: 0xffffffffb9e46066 unhandled rdmsr: 0x34
Mar 22 13:47:35 kvm-integration-slave kernel: kvm [10234]: vcpu1, guest rIP: 0xffffffffb9e46066 unhandled rdmsr: 0x606
Mar 22 13:49:01 kvm-integration-slave systemd-udevd[10449]: Could not generate persistent MAC address for virbr2: No such file or directory
Mar 22 13:49:05 kvm-integration-slave kernel: kvm [10564]: vcpu0, guest rIP: 0xffffffffa6e46066 unhandled rdmsr: 0x140
Mar 22 13:49:05 kvm-integration-slave kernel: kvm [10564]: vcpu0, guest rIP: 0xffffffffa6e46066 unhandled rdmsr: 0x4e
Mar 22 13:49:05 kvm-integration-slave kernel: kvm [10564]: vcpu1, guest rIP: 0xffffffffa6e46066 unhandled rdmsr: 0x140
Mar 22 13:49:05 kvm-integration-slave kernel: kvm [10564]: vcpu1, guest rIP: 0xffffffffa6e46066 unhandled rdmsr: 0x4e
Mar 22 13:49:50 kvm-integration-slave kernel: kvm [10564]: vcpu1, guest rIP: 0xffffffffa6e46066 unhandled rdmsr: 0x34
Mar 22 13:49:50 kvm-integration-slave kernel: kvm [10564]: vcpu1, guest rIP: 0xffffffffa6e46066 unhandled rdmsr: 0x606
Mar 22 13:53:22 kvm-integration-slave kernel: kvm [10871]: vcpu0, guest rIP: 0xffffffff94446066 unhandled rdmsr: 0x140
Mar 22 13:53:22 kvm-integration-slave kernel: kvm [10871]: vcpu0, guest rIP: 0xffffffff94446066 unhandled rdmsr: 0x4e
Mar 22 13:53:22 kvm-integration-slave kernel: kvm [10871]: vcpu1, guest rIP: 0xffffffff94446066 unhandled rdmsr: 0x140
Mar 22 13:53:22 kvm-integration-slave kernel: kvm [10871]: vcpu1, guest rIP: 0xffffffff94446066 unhandled rdmsr: 0x4e
Mar 22 13:54:01 kvm-integration-slave kernel: kvm [10871]: vcpu0, guest rIP: 0xffffffff94446066 unhandled rdmsr: 0x34
Mar 22 13:54:01 kvm-integration-slave kernel: kvm [10871]: vcpu0, guest rIP: 0xffffffff94446066 unhandled rdmsr: 0x606
Mar 22 13:55:25 kvm-integration-slave systemd-udevd[11093]: Could not generate persistent MAC address for virbr2: No such file or directory
Mar 22 13:55:28 kvm-integration-slave kernel: kvm [11210]: vcpu0, guest rIP: 0xffffffff8aa46066 unhandled rdmsr: 0x140
Mar 22 13:55:28 kvm-integration-slave kernel: kvm [11210]: vcpu0, guest rIP: 0xffffffff8aa46066 unhandled rdmsr: 0x4e
Mar 22 13:55:28 kvm-integration-slave kernel: kvm [11210]: vcpu1, guest rIP: 0xffffffff8aa46066 unhandled rdmsr: 0x140
Mar 22 13:55:28 kvm-integration-slave kernel: kvm [11210]: vcpu1, guest rIP: 0xffffffff8aa46066 unhandled rdmsr: 0x4e
Mar 22 13:56:06 kvm-integration-slave kernel: kvm [11210]: vcpu1, guest rIP: 0xffffffff8aa46066 unhandled rdmsr: 0x34
Mar 22 13:56:06 kvm-integration-slave kernel: kvm [11210]: vcpu1, guest rIP: 0xffffffff8aa46066 unhandled rdmsr: 0x606
Mar 22 14:01:35 kvm-integration-slave kernel: kvm [11534]: vcpu0, guest rIP: 0xffffffffa0846066 unhandled rdmsr: 0x140
Mar 22 14:01:35 kvm-integration-slave kernel: kvm [11534]: vcpu0, guest rIP: 0xffffffffa0846066 unhandled rdmsr: 0x4e
Mar 22 14:01:35 kvm-integration-slave kernel: kvm [11534]: vcpu1, guest rIP: 0xffffffffa0846066 unhandled rdmsr: 0x140
Mar 22 14:01:35 kvm-integration-slave kernel: kvm [11534]: vcpu1, guest rIP: 0xffffffffa0846066 unhandled rdmsr: 0x4e
Mar 22 14:02:14 kvm-integration-slave kernel: kvm [11534]: vcpu1, guest rIP: 0xffffffffa0846066 unhandled rdmsr: 0x34
Mar 22 14:02:14 kvm-integration-slave kernel: kvm [11534]: vcpu1, guest rIP: 0xffffffffa0846066 unhandled rdmsr: 0x606
Mar 22 14:03:56 kvm-integration-slave systemd-udevd[11740]: Could not generate persistent MAC address for virbr2: No such file or directory
Mar 22 14:04:00 kvm-integration-slave kernel: kvm [11856]: vcpu0, guest rIP: 0xffffffffab646066 unhandled rdmsr: 0x140
Mar 22 14:04:00 kvm-integration-slave kernel: kvm [11856]: vcpu0, guest rIP: 0xffffffffab646066 unhandled rdmsr: 0x4e
Mar 22 14:04:00 kvm-integration-slave kernel: kvm [11856]: vcpu1, guest rIP: 0xffffffffab646066 unhandled rdmsr: 0x140
Mar 22 14:04:00 kvm-integration-slave kernel: kvm [11856]: vcpu1, guest rIP: 0xffffffffab646066 unhandled rdmsr: 0x4e
Mar 22 14:04:55 kvm-integration-slave kernel: kvm [11856]: vcpu1, guest rIP: 0xffffffffab646066 unhandled rdmsr: 0x34
Mar 22 14:04:55 kvm-integration-slave kernel: kvm [11856]: vcpu1, guest rIP: 0xffffffffab646066 unhandled rdmsr: 0x606
Mar 22 14:07:15 kvm-integration-slave sudo[13101]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm reset
Mar 22 14:07:15 kvm-integration-slave sudo[13110]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm reset -f
Mar 22 14:07:15 kvm-integration-slave sudo[13132]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /data/*
Mar 22 14:07:15 kvm-integration-slave sudo[13134]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /etc/kubernetes/addons
Mar 22 14:07:15 kvm-integration-slave sudo[13136]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /var/lib/minikube/*
Mar 22 14:07:19 kvm-integration-slave sudo[14030]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/kill 6908 19361
Mar 22 14:07:19 kvm-integration-slave sudo[14051]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=out/e2e-linux-amd64 -minikube-start-args=--vm-driver=none -minikube-args=--v=10 --logtostderr --bootstrapper=kubeadm -test.v -test.timeout=50m -binary=out/minikube-linux-amd64
Mar 22 14:07:20 kvm-integration-slave sudo[14088]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl restart docker
Mar 22 14:07:22 kvm-integration-slave sudo[14262]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause-amd64_3.1
Mar 22 14:07:22 kvm-integration-slave sudo[14281]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.8
Mar 22 14:07:22 kvm-integration-slave sudo[14300]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause_3.1
Mar 22 14:07:23 kvm-integration-slave sudo[14322]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.8
Mar 22 14:07:23 kvm-integration-slave sudo[14342]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/storage-provisioner_v1.8.1
Mar 22 14:07:23 kvm-integration-slave sudo[14361]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/coredns_1.2.6
Mar 22 14:07:23 kvm-integration-slave sudo[14380]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
Mar 22 14:07:23 kvm-integration-slave sudo[14399]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kubernetes-dashboard-amd64_v1.10.1
Mar 22 14:07:24 kvm-integration-slave sudo[14418]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-proxy-amd64_v1.13.4
Mar 22 14:07:24 kvm-integration-slave sudo[14437]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-addon-manager_v8.6
Mar 22 14:07:24 kvm-integration-slave sudo[14456]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-controller-manager-amd64_v1.13.4
Mar 22 14:07:24 kvm-integration-slave sudo[14475]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-scheduler-amd64_v1.13.4
Mar 22 14:07:24 kvm-integration-slave sudo[14494]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-apiserver-amd64_v1.13.4
Mar 22 14:07:25 kvm-integration-slave sudo[14513]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/etcd-amd64_3.2.24
Mar 22 14:07:25 kvm-integration-slave sudo[14516]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Mar 22 14:07:25 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 14:07:25 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:07:25 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:07:25 kvm-integration-slave sudo[14529]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl enable kubelet
Mar 22 14:07:25 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:07:25 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 14:07:25 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:07:25 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:07:25 kvm-integration-slave sudo[14515]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl start kubelet
Mar 22 14:07:26 kvm-integration-slave systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 22 14:07:26 kvm-integration-slave systemd[1]: kubelet.service: Unit entered failed state.
Mar 22 14:07:26 kvm-integration-slave systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 22 14:07:27 kvm-integration-slave sudo[14555]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm config images pull --config /var/lib/kubeadm.yaml
Mar 22 14:07:28 kvm-integration-slave sudo[14629]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI
Mar 22 14:07:28 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:07:28 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:07:29 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 14:07:29 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:07:29 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:07:29 kvm-integration-slave systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 22 14:07:29 kvm-integration-slave systemd[1]: kubelet.service: Unit entered failed state.
Mar 22 14:07:29 kvm-integration-slave systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 22 14:08:09 kvm-integration-slave systemd-udevd[15841]: Could not generate persistent MAC address for veth59299aa: No such file or directory
Mar 22 14:08:09 kvm-integration-slave systemd-udevd[15842]: Could not generate persistent MAC address for veth42c0263: No such file or directory
Mar 22 14:08:09 kvm-integration-slave systemd-udevd[15900]: Could not generate persistent MAC address for vethcbd07f5: No such file or directory
Mar 22 14:08:09 kvm-integration-slave systemd-udevd[15901]: Could not generate persistent MAC address for vethb71f947: No such file or directory
Mar 22 14:09:20 kvm-integration-slave sudo[17133]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 14:09:20 kvm-integration-slave sudo[17148]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 14:09:20 kvm-integration-slave sudo[17160]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/sbin/route
Mar 22 14:09:21 kvm-integration-slave systemd-udevd[17220]: Could not generate persistent MAC address for veth0730b53: No such file or directory
Mar 22 14:09:21 kvm-integration-slave systemd-udevd[17219]: Could not generate persistent MAC address for veth3980b2e: No such file or directory
Mar 22 14:09:25 kvm-integration-slave sudo[17514]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 14:09:25 kvm-integration-slave sudo[17534]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 14:09:25 kvm-integration-slave sudo[17583]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/sbin/ip route add 10.96.0.0/12 via 10.128.0.3
Mar 22 14:09:26 kvm-integration-slave systemd-udevd[17596]: Could not generate persistent MAC address for vethcb785b2: No such file or directory
Mar 22 14:09:26 kvm-integration-slave systemd-udevd[17600]: Could not generate persistent MAC address for veth403d8b9: No such file or directory
Mar 22 14:09:26 kvm-integration-slave sudo[17707]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/dmesg -PH -L=never --level warn,err,crit,alert,emerg
Mar 22 14:10:10 kvm-integration-slave systemd-udevd[18469]: Could not generate persistent MAC address for veth7a373c7: No such file or directory
Mar 22 14:10:10 kvm-integration-slave systemd-udevd[18470]: Could not generate persistent MAC address for veth02dd741: No such file or directory
Mar 22 14:10:12 kvm-integration-slave sudo[18673]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm reset --force
Mar 22 14:10:16 kvm-integration-slave sudo[19551]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/sbin/ip route delete 10.96.0.0/12
Mar 22 14:10:17 kvm-integration-slave sudo[19929]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl stop kubelet.service
Mar 22 14:10:17 kvm-integration-slave sudo[19940]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /data/minikube /etc/kubernetes/manifests /var/lib/minikube
Mar 22 14:10:17 kvm-integration-slave sudo[19967]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl restart docker
Mar 22 14:10:20 kvm-integration-slave sudo[20145]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause_3.1
Mar 22 14:10:20 kvm-integration-slave sudo[20166]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/coredns_1.2.6
Mar 22 14:10:20 kvm-integration-slave sudo[20185]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause-amd64_3.1
Mar 22 14:10:20 kvm-integration-slave sudo[20205]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.8
Mar 22 14:10:20 kvm-integration-slave sudo[20224]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.8
Mar 22 14:10:20 kvm-integration-slave sudo[20243]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/storage-provisioner_v1.8.1
Mar 22 14:10:21 kvm-integration-slave sudo[20263]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
Mar 22 14:10:21 kvm-integration-slave sudo[20282]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-addon-manager_v8.6
Mar 22 14:10:21 kvm-integration-slave sudo[20300]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-apiserver-amd64_v1.13.4
Mar 22 14:10:21 kvm-integration-slave sudo[20319]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-proxy-amd64_v1.13.4
Mar 22 14:10:21 kvm-integration-slave sudo[20338]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-scheduler-amd64_v1.13.4
Mar 22 14:10:22 kvm-integration-slave sudo[20358]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/etcd-amd64_3.2.24
Mar 22 14:10:22 kvm-integration-slave sudo[20377]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kubernetes-dashboard-amd64_v1.10.1
Mar 22 14:10:22 kvm-integration-slave sudo[20396]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-controller-manager-amd64_v1.13.4
Mar 22 14:10:22 kvm-integration-slave sudo[20399]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Mar 22 14:10:22 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 14:10:22 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:10:22 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:10:22 kvm-integration-slave sudo[20412]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl enable kubelet
Mar 22 14:10:22 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:10:22 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 14:10:22 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:10:22 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:10:22 kvm-integration-slave sudo[20398]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl start kubelet
Mar 22 14:10:22 kvm-integration-slave sudo[20434]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm config images pull --config /var/lib/kubeadm.yaml
Mar 22 14:10:23 kvm-integration-slave systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 22 14:10:23 kvm-integration-slave systemd[1]: kubelet.service: Unit entered failed state.
Mar 22 14:10:23 kvm-integration-slave systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 22 14:10:24 kvm-integration-slave sudo[20516]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI
Mar 22 14:10:24 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:10:24 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:10:24 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 14:10:24 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:10:24 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:10:25 kvm-integration-slave systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 22 14:10:25 kvm-integration-slave systemd[1]: kubelet.service: Unit entered failed state.
Mar 22 14:10:25 kvm-integration-slave systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 22 14:11:02 kvm-integration-slave systemd-udevd[21718]: Could not generate persistent MAC address for veth32c1b72: No such file or directory
Mar 22 14:11:02 kvm-integration-slave systemd-udevd[21719]: Could not generate persistent MAC address for vethe2aa1a2: No such file or directory
Mar 22 14:11:02 kvm-integration-slave systemd-udevd[21719]: Could not generate persistent MAC address for veth358a2f1: No such file or directory
Mar 22 14:11:02 kvm-integration-slave systemd-udevd[21772]: Could not generate persistent MAC address for veth99dd7f7: No such file or directory
Mar 22 14:11:04 kvm-integration-slave systemd-udevd[22320]: Could not generate persistent MAC address for veth6570f90: No such file or directory
Mar 22 14:11:04 kvm-integration-slave systemd-udevd[22319]: Could not generate persistent MAC address for veth9919a19: No such file or directory
Mar 22 14:12:15 kvm-integration-slave sudo[23527]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 14:12:15 kvm-integration-slave sudo[23541]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 14:12:16 kvm-integration-slave sudo[23566]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl stop kubelet.service
Mar 22 14:12:16 kvm-integration-slave systemd-udevd[24339]: link_config: could not get ethtool features for veth9919a19
Mar 22 14:12:16 kvm-integration-slave systemd-udevd[24339]: Could not set offload features of veth9919a19: No such device
Mar 22 14:12:26 kvm-integration-slave sudo[24499]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl restart docker
Mar 22 14:12:28 kvm-integration-slave sudo[24682]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.8
Mar 22 14:12:29 kvm-integration-slave sudo[24701]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/storage-provisioner_v1.8.1
Mar 22 14:12:29 kvm-integration-slave sudo[24720]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
Mar 22 14:12:29 kvm-integration-slave sudo[24739]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause-amd64_3.1
Mar 22 14:12:29 kvm-integration-slave sudo[24759]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.8
Mar 22 14:12:29 kvm-integration-slave sudo[24778]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause_3.1
Mar 22 14:12:30 kvm-integration-slave sudo[24797]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-apiserver-amd64_v1.13.4
Mar 22 14:12:30 kvm-integration-slave sudo[24816]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-addon-manager_v8.6
Mar 22 14:12:30 kvm-integration-slave sudo[24835]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/coredns_1.2.6
Mar 22 14:12:30 kvm-integration-slave sudo[24854]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-scheduler-amd64_v1.13.4
Mar 22 14:12:30 kvm-integration-slave sudo[24873]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-controller-manager-amd64_v1.13.4
Mar 22 14:12:31 kvm-integration-slave sudo[24892]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-proxy-amd64_v1.13.4
Mar 22 14:12:31 kvm-integration-slave sudo[24911]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kubernetes-dashboard-amd64_v1.10.1
Mar 22 14:12:31 kvm-integration-slave sudo[24930]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/etcd-amd64_3.2.24
Mar 22 14:12:31 kvm-integration-slave sudo[24933]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Mar 22 14:12:31 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 14:12:31 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:12:31 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:12:31 kvm-integration-slave sudo[24946]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl enable kubelet
Mar 22 14:12:31 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:12:31 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 14:12:31 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:12:31 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:12:31 kvm-integration-slave sudo[24932]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl start kubelet
Mar 22 14:12:31 kvm-integration-slave sudo[24967]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm config images pull --config /var/lib/kubeadm.yaml
Mar 22 14:12:33 kvm-integration-slave sudo[25681]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase certs all --config /var/lib/kubeadm.yaml
Mar 22 14:12:33 kvm-integration-slave sudo[25695]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase kubeconfig all --config /var/lib/kubeadm.yaml
Mar 22 14:12:35 kvm-integration-slave sudo[25725]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase control-plane all --config /var/lib/kubeadm.yaml
Mar 22 14:12:35 kvm-integration-slave sudo[25735]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase etcd local --config /var/lib/kubeadm.yaml
Mar 22 14:12:41 kvm-integration-slave sudo[25748]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 14:12:43 kvm-integration-slave systemd-udevd[25777]: Could not generate persistent MAC address for vetha54149b: No such file or directory
Mar 22 14:12:43 kvm-integration-slave systemd-udevd[25778]: Could not generate persistent MAC address for vethd7f9d46: No such file or directory
Mar 22 14:12:43 kvm-integration-slave systemd-udevd[25933]: Could not generate persistent MAC address for vethe2cc1d3: No such file or directory
Mar 22 14:12:43 kvm-integration-slave systemd-udevd[25932]: Could not generate persistent MAC address for veth9ed400c: No such file or directory
Mar 22 14:12:44 kvm-integration-slave systemd-udevd[25964]: Could not generate persistent MAC address for veth85c3b17: No such file or directory
Mar 22 14:12:44 kvm-integration-slave systemd-udevd[25963]: Could not generate persistent MAC address for veth0d8d229: No such file or directory
Mar 22 14:12:51 kvm-integration-slave sudo[26418]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 14:12:51 kvm-integration-slave sudo[26430]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm reset --force
Mar 22 14:12:57 kvm-integration-slave sudo[27610]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl stop kubelet.service
Mar 22 14:12:57 kvm-integration-slave sudo[27621]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /data/minikube /etc/kubernetes/manifests /var/lib/minikube
Mar 22 14:12:58 kvm-integration-slave sudo[27684]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl restart docker
Mar 22 14:13:00 kvm-integration-slave sudo[27858]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause-amd64_3.1
Mar 22 14:13:00 kvm-integration-slave sudo[27878]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause_3.1
Mar 22 14:13:00 kvm-integration-slave sudo[27899]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
Mar 22 14:13:00 kvm-integration-slave sudo[27918]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/storage-provisioner_v1.8.1
Mar 22 14:13:01 kvm-integration-slave sudo[27937]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.8
Mar 22 14:13:01 kvm-integration-slave sudo[27957]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.8
Mar 22 14:13:01 kvm-integration-slave sudo[27976]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-addon-manager_v8.6
Mar 22 14:13:01 kvm-integration-slave sudo[27996]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/coredns_1.2.6
Mar 22 14:13:02 kvm-integration-slave sudo[28016]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-scheduler-amd64_v1.13.4
Mar 22 14:13:03 kvm-integration-slave sudo[28036]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-controller-manager-amd64_v1.13.4
Mar 22 14:13:03 kvm-integration-slave sudo[28055]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kubernetes-dashboard-amd64_v1.10.1
Mar 22 14:13:03 kvm-integration-slave sudo[28074]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/etcd-amd64_3.2.24
Mar 22 14:13:03 kvm-integration-slave sudo[28093]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-proxy-amd64_v1.13.4
Mar 22 14:13:04 kvm-integration-slave sudo[28112]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-apiserver-amd64_v1.13.4
Mar 22 14:13:04 kvm-integration-slave sudo[28115]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Mar 22 14:13:04 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 14:13:04 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:13:04 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:13:04 kvm-integration-slave sudo[28128]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl enable kubelet
Mar 22 14:13:04 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:13:04 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 14:13:04 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:13:04 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:13:04 kvm-integration-slave sudo[28114]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl start kubelet
Mar 22 14:13:04 kvm-integration-slave sudo[28149]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm config images pull --config /var/lib/kubeadm.yaml
Mar 22 14:13:04 kvm-integration-slave systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 22 14:13:04 kvm-integration-slave systemd[1]: kubelet.service: Unit entered failed state.
Mar 22 14:13:04 kvm-integration-slave systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 22 14:13:05 kvm-integration-slave sudo[28229]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=SystemVerification --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI
Mar 22 14:13:06 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:13:06 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:13:06 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 14:13:06 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:13:06 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:13:06 kvm-integration-slave systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 22 14:13:06 kvm-integration-slave systemd[1]: kubelet.service: Unit entered failed state.
Mar 22 14:13:06 kvm-integration-slave systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 22 14:13:43 kvm-integration-slave systemd-udevd[29480]: Could not generate persistent MAC address for veth1ada1c3: No such file or directory
Mar 22 14:13:43 kvm-integration-slave systemd-udevd[29478]: Could not generate persistent MAC address for vethb1ff160: No such file or directory
Mar 22 14:13:43 kvm-integration-slave systemd-udevd[29510]: Could not generate persistent MAC address for vethf83175e: No such file or directory
Mar 22 14:13:43 kvm-integration-slave systemd-udevd[29511]: Could not generate persistent MAC address for veth74605bf: No such file or directory
Mar 22 14:13:44 kvm-integration-slave systemd-udevd[30047]: Could not generate persistent MAC address for vethd73cc5c: No such file or directory
Mar 22 14:13:44 kvm-integration-slave systemd-udevd[30048]: Could not generate persistent MAC address for veth8f461c5: No such file or directory
Mar 22 14:14:57 kvm-integration-slave sudo[31269]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 14:14:57 kvm-integration-slave sudo[31285]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 14:14:57 kvm-integration-slave sudo[31313]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl stop kubelet.service
Mar 22 14:14:58 kvm-integration-slave systemd-udevd[32026]: link_config: could not get ethtool features for vethb1ff160
Mar 22 14:14:58 kvm-integration-slave systemd-udevd[32026]: Could not set offload features of vethb1ff160: No such device
Mar 22 14:15:08 kvm-integration-slave sudo[32245]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl restart docker
Mar 22 14:15:10 kvm-integration-slave sudo[32421]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause-amd64_3.1
Mar 22 14:15:10 kvm-integration-slave sudo[32440]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/storage-provisioner_v1.8.1
Mar 22 14:15:10 kvm-integration-slave sudo[32459]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.8
Mar 22 14:15:11 kvm-integration-slave sudo[32479]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/pause_3.1
Mar 22 14:15:11 kvm-integration-slave sudo[32499]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-scheduler-amd64_v1.13.4
Mar 22 14:15:11 kvm-integration-slave sudo[32519]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.8
Mar 22 14:15:11 kvm-integration-slave sudo[32538]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-controller-manager-amd64_v1.13.4
Mar 22 14:15:11 kvm-integration-slave sudo[32557]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.8
Mar 22 14:15:12 kvm-integration-slave sudo[32576]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/coredns_1.2.6
Mar 22 14:15:12 kvm-integration-slave sudo[32596]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-apiserver-amd64_v1.13.4
Mar 22 14:15:12 kvm-integration-slave sudo[32615]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-addon-manager_v8.6
Mar 22 14:15:12 kvm-integration-slave sudo[32633]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kube-proxy-amd64_v1.13.4
Mar 22 14:15:12 kvm-integration-slave sudo[32652]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/kubernetes-dashboard-amd64_v1.10.1
Mar 22 14:15:13 kvm-integration-slave sudo[32672]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /tmp/etcd-amd64_3.2.24
Mar 22 14:15:13 kvm-integration-slave sudo[32675]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Mar 22 14:15:13 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 14:15:13 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:15:13 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:15:13 kvm-integration-slave sudo[32688]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl enable kubelet
Mar 22 14:15:13 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:15:13 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/expand-root.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Mar 22 14:15:13 kvm-integration-slave systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:15:13 kvm-integration-slave systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Mar 22 14:15:13 kvm-integration-slave sudo[32674]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl start kubelet
Mar 22 14:15:13 kvm-integration-slave sudo[32709]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm config images pull --config /var/lib/kubeadm.yaml
Mar 22 14:15:15 kvm-integration-slave sudo[1008]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase certs all --config /var/lib/kubeadm.yaml
Mar 22 14:15:15 kvm-integration-slave sudo[1022]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase kubeconfig all --config /var/lib/kubeadm.yaml
Mar 22 14:15:17 kvm-integration-slave sudo[1051]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase control-plane all --config /var/lib/kubeadm.yaml
Mar 22 14:15:17 kvm-integration-slave sudo[1062]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm init phase etcd local --config /var/lib/kubeadm.yaml
Mar 22 14:15:23 kvm-integration-slave sudo[1081]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 14:15:25 kvm-integration-slave systemd-udevd[1109]: Could not generate persistent MAC address for veth1088e87: No such file or directory
Mar 22 14:15:25 kvm-integration-slave systemd-udevd[1110]: Could not generate persistent MAC address for veth4e74c34: No such file or directory
Mar 22 14:15:25 kvm-integration-slave systemd-udevd[1135]: Could not generate persistent MAC address for veth74385b1: No such file or directory
Mar 22 14:15:25 kvm-integration-slave systemd-udevd[1134]: Could not generate persistent MAC address for vethff18fad: No such file or directory
Mar 22 14:15:25 kvm-integration-slave systemd-udevd[1422]: Could not generate persistent MAC address for veth43c76d4: No such file or directory
Mar 22 14:15:25 kvm-integration-slave systemd-udevd[1423]: Could not generate persistent MAC address for veth1c68731: No such file or directory
Mar 22 14:15:33 kvm-integration-slave sudo[1806]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl is-active kubelet
Mar 22 14:15:33 kvm-integration-slave sudo[1819]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/usr/bin/kubeadm reset --force
Mar 22 14:15:39 kvm-integration-slave sudo[3000]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/systemctl stop kubelet.service
Mar 22 14:15:39 kvm-integration-slave sudo[3011]: root : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/rm -rf /data/minikube /etc/kubernetes/manifests /var/lib/minikube
Mar 22 14:15:39 kvm-integration-slave sudo[3029]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=/bin/cat /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/kubeconfig
Mar 22 14:15:39 kvm-integration-slave sudo[3039]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=out/minikube-linux-amd64 status
Mar 22 14:15:40 kvm-integration-slave sudo[3052]: jenkins : TTY=unknown ; PWD=/home/jenkins/workspace/Linux_Integration_Tests_none ; USER=root ; COMMAND=out/minikube-linux-amd64 ip
+++ free
              total        used        free      shared  buff/cache   available
Mem:       15404732      629096    11786372      173004     2989264    14268412
Swap:             0           0           0
+++ cat /etc/VERSION
cat: /etc/VERSION: No such file or directory
+++ type -P virsh
/usr/bin/virsh
+++ virsh -c qemu:///system list --all
 Id    Name                           State
----------------------------------------------------

+++ type -P vboxmanage
/usr/bin/vboxmanage
+++ vboxmanage list vms
+++ type -P hdiutil
+++ netstat -rn -f inet
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         10.128.0.1      0.0.0.0         UG        0 0          0 eth0
10.128.0.1      0.0.0.0         255.255.255.255 UH        0 0          0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
192.168.42.0    0.0.0.0         255.255.255.0   U         0 0          0 virbr1
192.168.122.0   0.0.0.0         255.255.255.0   U         0 0          0 virbr0
+++ echo ''

+++ echo '>>> end print-debug-info'
>>> end print-debug-info
+++ echo ''

+++ set -e
++ echo '>> Cleaning up after ourselves ...'
>> Cleaning up after ourselves ...
++ sudo -E out/minikube-linux-amd64 tunnel --cleanup
++ sudo -E out/minikube-linux-amd64 delete
++ cleanup_stale_routes
++ local 'show=netstat -rn -f inet'
++ local 'del=sudo route -n delete'
+++ uname
++ [[ Linux == \L\i\n\u\x ]]
++ show='ip route show'
++ del='sudo ip route delete'
+++ ip route show
+++ awk '{ print $1 }'
+++ grep 10.96.0.0
+++ true
++ local troutes=
++ sudo -E rm -Rf /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/.minikube
++ sudo -E rm -f /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a/kubeconfig
++ rmdir /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a
+++ date
++ echo '>> /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a completed at Fri Mar 22 14:15:40 UTC 2019'
>> /home/jenkins/minikube-integration/linux-amd64-none-3714-13100-daec030cdfb543e2314b0e13d7a55b0d5901de0a completed at Fri Mar 22 14:15:40 UTC 2019
++ [[ 3714 != \m\a\s\t\e\r ]]
++ readonly target_url=https://storage.googleapis.com/minikube-builds/logs/3714/Linux-None.txt
++ target_url=https://storage.googleapis.com/minikube-builds/logs/3714/Linux-None.txt
++ curl -s 'https://api.github.com/repos/kubernetes/minikube/statuses/daec030cdfb543e2314b0e13d7a55b0d5901de0a?access_token=****' -H 'Content-Type: application/json' -X POST -d '{"state": "failure", "description": "Jenkins", "target_url": "https://storage.googleapis.com/minikube-builds/logs/3714/Linux-None.txt", "context": "Linux-None"}'
{
  "url": "https://api.github.com/repos/kubernetes/minikube/statuses/daec030cdfb543e2314b0e13d7a55b0d5901de0a",
  "avatar_url": "https://avatars1.githubusercontent.com/u/20374350?v=4",
  "id": 6474623465,
  "node_id": "MDEzOlN0YXR1c0NvbnRleHQ2NDc0NjIzNDY1",
  "state": "failure",
  "description": "Jenkins",
  "target_url": "https://storage.googleapis.com/minikube-builds/logs/3714/Linux-None.txt",
  "context": "Linux-None",
  "created_at": "2019-03-22T14:15:41Z",
  "updated_at": "2019-03-22T14:15:41Z",
  "creator": {
    "login": "minikube-bot",
    "id": 20374350,
    "node_id": "MDQ6VXNlcjIwMzc0MzUw",
    "avatar_url": "https://avatars1.githubusercontent.com/u/20374350?v=4",
    "gravatar_id": "",
    "url": "https://api.github.com/users/minikube-bot",
    "html_url": "https://github.com/minikube-bot",
    "followers_url": "https://api.github.com/users/minikube-bot/followers",
    "following_url": "https://api.github.com/users/minikube-bot/following{/other_user}",
    "gists_url": "https://api.github.com/users/minikube-bot/gists{/gist_id}",
    "starred_url": "https://api.github.com/users/minikube-bot/starred{/owner}{/repo}",
    "subscriptions_url": "https://api.github.com/users/minikube-bot/subscriptions",
    "organizations_url": "https://api.github.com/users/minikube-bot/orgs",
    "repos_url": "https://api.github.com/users/minikube-bot/repos",
    "events_url": "https://api.github.com/users/minikube-bot/events{/privacy}",
    "received_events_url": "https://api.github.com/users/minikube-bot/received_events",
    "type": "User",
    "site_admin": false
  }
}
++ exit 1
Build step 'Execute shell' marked build as failure