=== RUN TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Run: out/minikube-linux-amd64 pause -p old-k8s-version-20210609012901-9941 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-20210609012901-9941 --alsologtostderr -v=1: exit status 80 (3.080283228s)
-- stdout --
* Pausing node old-k8s-version-20210609012901-9941 ...
-- /stdout --
** stderr **
I0609 01:41:59.186429 373430 out.go:291] Setting OutFile to fd 1 ...
I0609 01:41:59.186505 373430 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0609 01:41:59.186509 373430 out.go:304] Setting ErrFile to fd 2...
I0609 01:41:59.186512 373430 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0609 01:41:59.186607 373430 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/bin
I0609 01:41:59.186755 373430 out.go:298] Setting JSON to false
I0609 01:41:59.186773 373430 mustload.go:65] Loading cluster: old-k8s-version-20210609012901-9941
I0609 01:41:59.187362 373430 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210609012901-9941 --format={{.State.Status}}
I0609 01:41:59.225850 373430 host.go:66] Checking if "old-k8s-version-20210609012901-9941" exists ...
I0609 01:41:59.226895 373430 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:%!s(int=2) cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso https://github.com/kubernetes/minikube/releases/download/v1.20.0/minikube-v1.20.0.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.20.0.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-20210609012901-9941 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 showbootstrapperdeprecationnotification:%!s(bool=true) showdriverdeprecationnotification:%!s(bool=true) ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantkubectldownloadmsg:%!s(bool=true) wantnonedriverwarning:%!s(bool=true) wantreporterror:%!s(bool=false) wantreporterrorprompt:%!s(bool=true) wantupdatenotification:%!s(bool=true)]="(MISSING)"
I0609 01:41:59.229384 373430 out.go:170] * Pausing node old-k8s-version-20210609012901-9941 ...
I0609 01:41:59.229409 373430 host.go:66] Checking if "old-k8s-version-20210609012901-9941" exists ...
I0609 01:41:59.229744 373430 ssh_runner.go:149] Run: systemctl --version
I0609 01:41:59.229791 373430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210609012901-9941
I0609 01:41:59.267633 373430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32960 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/old-k8s-version-20210609012901-9941/id_rsa Username:docker}
I0609 01:41:59.357566 373430 ssh_runner.go:149] Run: sudo systemctl disable kubelet
I0609 01:41:59.480954 373430 retry.go:31] will retry after 276.165072ms: kubelet disable: sudo systemctl disable kubelet: Process exited with status 1
stdout:
stderr:
Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install disable kubelet
update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
I0609 01:41:59.757382 373430 ssh_runner.go:149] Run: sudo systemctl disable kubelet
I0609 01:41:59.875735 373430 retry.go:31] will retry after 540.190908ms: kubelet disable: sudo systemctl disable kubelet: Process exited with status 1
stdout:
stderr:
Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install disable kubelet
update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
I0609 01:42:00.416476 373430 ssh_runner.go:149] Run: sudo systemctl disable kubelet
I0609 01:42:00.527096 373430 retry.go:31] will retry after 655.06503ms: kubelet disable: sudo systemctl disable kubelet: Process exited with status 1
stdout:
stderr:
Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install disable kubelet
update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
I0609 01:42:01.182816 373430 ssh_runner.go:149] Run: sudo systemctl disable kubelet
I0609 01:42:01.290687 373430 retry.go:31] will retry after 791.196345ms: kubelet disable: sudo systemctl disable kubelet: Process exited with status 1
stdout:
stderr:
Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install disable kubelet
update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
I0609 01:42:02.082620 373430 ssh_runner.go:149] Run: sudo systemctl disable kubelet
I0609 01:42:02.191443 373430 out.go:170]
W0609 01:42:02.191575 373430 out.go:235] X Exiting due to GUEST_PAUSE: kubelet disable: sudo systemctl disable kubelet: Process exited with status 1
stdout:
stderr:
Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install disable kubelet
update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
X Exiting due to GUEST_PAUSE: kubelet disable: sudo systemctl disable kubelet: Process exited with status 1
stdout:
stderr:
Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install disable kubelet
update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
W0609 01:42:02.191596 373430 out.go:235] *
*
W0609 01:42:02.202509 373430 out.go:235] ╭──────────────────────────────────────────────────────────────────────────────╮
╭──────────────────────────────────────────────────────────────────────────────╮
W0609 01:42:02.202529 373430 out.go:235] │ │
│ │
W0609 01:42:02.202534 373430 out.go:235] │ * If the above advice does not help, please let us know: │
│ * If the above advice does not help, please let us know: │
W0609 01:42:02.202555 373430 out.go:235] │ https://github.com/kubernetes/minikube/issues/new/choose │
│ https://github.com/kubernetes/minikube/issues/new/choose │
W0609 01:42:02.202560 373430 out.go:235] │ │
│ │
W0609 01:42:02.202565 373430 out.go:235] │ * Please attach the following file to the GitHub issue: │
│ * Please attach the following file to the GitHub issue: │
W0609 01:42:02.202571 373430 out.go:235] │ * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log │
│ * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log │
W0609 01:42:02.202578 373430 out.go:235] │ │
│ │
W0609 01:42:02.202583 373430 out.go:235] ╰──────────────────────────────────────────────────────────────────────────────╯
╰──────────────────────────────────────────────────────────────────────────────╯
W0609 01:42:02.202587 373430 out.go:235]
I0609 01:42:02.204165 373430 out.go:170]
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p old-k8s-version-20210609012901-9941 --alsologtostderr -v=1 failed: exit status 80
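The failure above is mechanical rather than a flake in pause itself: `minikube pause` first runs `sudo systemctl disable kubelet` over SSH, systemd hands the disable off to the SysV compatibility shim (`/lib/systemd/systemd-sysv-install`), and that shim's `update-rc.d` step aborts because the kubelet unit's generated init script declares no `Default-Start` runlevels. The `retry.go:31` lines show minikube re-running the command with growing, jittered delays (276ms, 540ms, 655ms, 791ms) until its budget expires, at which point it surfaces `GUEST_PAUSE` and exit status 80. A minimal Go sketch of that expiring-retry shape, with illustrative names and backoff constants rather than minikube's actual `retry` package API:

```go
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryExpiring re-runs fn with jittered, growing delays until it succeeds
// or the time budget runs out, mirroring the "will retry after <delay>"
// lines in the log above. The name and constants are assumptions for
// illustration, not minikube's real implementation.
func retryExpiring(budget time.Duration, fn func() error) error {
	deadline := time.Now().Add(budget)
	base := 250 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return err // budget exhausted; caller reports GUEST_PAUSE
		}
		// Jitter the delay so concurrent callers do not retry in lockstep.
		d := base/2 + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		base += base / 2 // grow the base delay between attempts
	}
}

func main() {
	err := retryExpiring(3*time.Second, func() error {
		// The command that fails identically on every attempt in the log.
		out, err := exec.Command("sudo", "systemctl", "disable", "kubelet").CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubelet disable: %w\n%s", err, out)
		}
		return nil
	})
	if err != nil {
		fmt.Println("X Exiting due to GUEST_PAUSE:", err)
	}
}
```

Because the underlying `update-rc.d` error is deterministic, every attempt fails the same way; the four delays plus the command runtimes account for the roughly 3.08s the pause spent before giving up.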
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:227: (dbg) Run: docker inspect old-k8s-version-20210609012901-9941
helpers_test.go:231: (dbg) docker inspect old-k8s-version-20210609012901-9941:
-- stdout --
[
{
"Id": "91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f",
"Created": "2021-06-09T01:32:22.976408213Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 300855,
"ExitCode": 0,
"Error": "",
"StartedAt": "2021-06-09T01:34:39.439780041Z",
"FinishedAt": "2021-06-09T01:34:37.912284168Z"
},
"Image": "sha256:9fce26cb202ecbcb479d0e9dcc943ed048e5957c0bb68667d9476ebc413ee6d7",
"ResolvConfPath": "/var/lib/docker/containers/91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f/hostname",
"HostsPath": "/var/lib/docker/containers/91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f/hosts",
"LogPath": "/var/lib/docker/containers/91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f/91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f-json.log",
"Name": "/old-k8s-version-20210609012901-9941",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-20210609012901-9941:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-20210609012901-9941",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/e06a36ecd342e83282e80589cd0b96a25668d1c022258253a30c4abc82951614-init/diff:/var/lib/docker/overlay2/bc56a5d6f9b885d4e990c356e0ccfc01ecbed88f252ebfaa9441de3180832d7f/diff:/var/lib/docker/overlay2/25b993e35a4369dc1c3bb5a1579e6e35329eea51bcbd403abb32859a67061a54/diff:/var/lib/docker/overlay2/1fe8141f79894ceaa71723e3cebb26aaf6eb09b92957f7ef1ad563a53df17477/diff:/var/lib/docker/overlay2/c43074dca065bc9311721e20aecd4b6af65294c44e7d9ff6f84a18717d22f9da/diff:/var/lib/docker/overlay2/1318b2c7f3cf224a7ccebeb69bbc1127489945bbb88c21f3171770868a161187/diff:/var/lib/docker/overlay2/c38fd14f646377d81cc91524a921d99d0518ca09e12d17c45948037013fd9100/diff:/var/lib/docker/overlay2/3860f2d47e6d7da92eb5946fda824e25f4c789d00d7e8daa71d0200aac14b536/diff:/var/lib/docker/overlay2/f55aac0c255ec87a42f4d6bc6e79a51ccac3a1d472b1ef4565f141af1acedb04/diff:/var/lib/docker/overlay2/7a1f3b94ec1a7fec96e3f1c789cb025636706f45db2f63cafd48827296910d1d/diff:/var/lib/docker/overlay2/653b9d
24f60635898ac8c6e1b372c54937a708e1e483d47012bc30c58bba0c8c/diff:/var/lib/docker/overlay2/c1832b167afb6406029f607ff5bfad73774ce698299c2b90633d157123654c52/diff:/var/lib/docker/overlay2/75fc291915e6994891ddc9a151bd4c24056ab74e6c8428ba1aef2b2949bbc56e/diff:/var/lib/docker/overlay2/8187764e5fdd094760f8daef22c41c28995fd009c1c56d956db1bb78266b84b2/diff:/var/lib/docker/overlay2/8257db85fb8192780c9e79b131704c61b85e47f9e5c7152097b1a341d06f5840/diff:/var/lib/docker/overlay2/e7499e6556225f397b775719266146f16285f25036f4cf348b09e2fd3be18982/diff:/var/lib/docker/overlay2/84dea696e080b4925128f5b32c22c548c34a63a9dfafa5cb45a932dded279620/diff:/var/lib/docker/overlay2/0646a50eb26264b2a4349823800615095034ab376268714c37e1193106307a2a/diff:/var/lib/docker/overlay2/873d4336e86132442a84ef0da60e4f8fdf8e4989093c0f2a4279120e10ad4f2c/diff:/var/lib/docker/overlay2/44007c68fc2016e815ed96a5faadd25bfb35c362bf1b0521c430ef2ea3805f42/diff:/var/lib/docker/overlay2/7f832f8cf06c783bc6789b50392d803201e52f6baa4a788b5ce48169c94316eb/diff:/var/lib/d
ocker/overlay2/aa919f3d56d7f8b40e56ee381db724e83ee09c96eb696e67326ae47e81324228/diff:/var/lib/docker/overlay2/c53704cae60bb8bd8b355c2d6fb142c9e105dbfeeece4ba9ee0eb81aaaa83fe9/diff:/var/lib/docker/overlay2/1d80475a809da44174d557238fbb00860567d808a157fc2291ac5fedb6f8b2d2/diff:/var/lib/docker/overlay2/d7e1256a346a88b7ce7e6fe9d6ab1146a2c7705c99fcb974ad10b671573b6b83/diff:/var/lib/docker/overlay2/67dc882ee4f992f5a9dc58b56bf7d7a6e78ffe50ccd6227d33d9e2047b7ff877/diff:/var/lib/docker/overlay2/156a8e643f241fdf84afe135ad766dbedd0c515a725939d012de628eb9dd2013/diff:/var/lib/docker/overlay2/ee244a7deb19ed9dc719af435d92c54624874690ce0999c7d030e2f57ecb9e6a/diff:/var/lib/docker/overlay2/91f8a889599c1faaa7f40cc449793deff620d17e83e88dac22c223f131237b12/diff:/var/lib/docker/overlay2/fa8fc61ecf97cd7f2b96efc9d54ba3d9a5b32dcdbb844f360ee173af8fae43a7/diff:/var/lib/docker/overlay2/908106b57878c9eeda6e0d202eee052dee30050250f2a3e5c7d61739d6548623/diff:/var/lib/docker/overlay2/98083c942683a1ac5defcb4b953ba78bbab830ad8c88c4dd145379ebe55
e20a9/diff:/var/lib/docker/overlay2/980703603c9fd3a987c703f9800e56f69031cc7d19f3c692d95eb0937cbb5fd7/diff:/var/lib/docker/overlay2/bc7be9aeb566f06fe346d144629a571aec3e378e82aedf4d6c3fb065569091b2/diff:/var/lib/docker/overlay2/e61aabb9eb2161801d4795e4a00f41afd54c504a52aeeef70d49d2a4f47fcd99/diff:/var/lib/docker/overlay2/a69e80d9160e6158cf9f37881d60928bf3221341b1fffe8d2855488233278102/diff:/var/lib/docker/overlay2/f76fd1ba3588d22f5228ab597df7a62e20a79217c1712dbc33e20061e12891c6/diff",
"MergedDir": "/var/lib/docker/overlay2/e06a36ecd342e83282e80589cd0b96a25668d1c022258253a30c4abc82951614/merged",
"UpperDir": "/var/lib/docker/overlay2/e06a36ecd342e83282e80589cd0b96a25668d1c022258253a30c4abc82951614/diff",
"WorkDir": "/var/lib/docker/overlay2/e06a36ecd342e83282e80589cd0b96a25668d1c022258253a30c4abc82951614/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-20210609012901-9941",
"Source": "/var/lib/docker/volumes/old-k8s-version-20210609012901-9941/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-20210609012901-9941",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-20210609012901-9941",
"name.minikube.sigs.k8s.io": "old-k8s-version-20210609012901-9941",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "1aaecc7a078c61af85d4e6c7c12ffcbc3226c3c0b6bdcdb83ef76e454d99e1ed",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32960"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32959"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32956"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32958"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32957"
}
]
},
"SandboxKey": "/var/run/docker/netns/1aaecc7a078c",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-20210609012901-9941": {
"IPAMConfig": {
"IPv4Address": "192.168.67.2"
},
"Links": null,
"Aliases": [
"91dce77935ba"
],
"NetworkID": "3b40e12707af96d7a87ef0baaec85159df278a3dc4bf817ecae3932e0bcfbdd2",
"EndpointID": "c1650ce3840b80594246acc2f9fcfa432a39e6b48bada03c110930f25ecac707",
"Gateway": "192.168.67.1",
"IPAddress": "192.168.67.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:43:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
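The post-mortem confirms the container itself is healthy: `State.Status` is `running` and `22/tcp` is still published on host port 32960, the same port the earlier `sshutil` line dialed. The harness reads these fields through Go templates (see the `cli_runner` invocations above); here is a small self-contained sketch of extracting the same two values by decoding the inspect JSON directly — the struct shape is assumed from the output above, not taken from minikube's code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container declares only the fields we care about; json.Unmarshal
// silently ignores the rest of the large inspect document.
type container struct {
	State           struct{ Status string }
	NetworkSettings struct {
		Ports map[string][]struct{ HostIp, HostPort string }
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "old-k8s-version-20210609012901-9941").Output()
	if err != nil {
		panic(err)
	}
	var cs []container // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &cs); err != nil {
		panic(err)
	}
	// Equivalent to the template used by cli_runner above:
	// {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
	ssh := cs[0].NetworkSettings.Ports["22/tcp"][0]
	fmt.Println(cs[0].State.Status, ssh.HostIp+":"+ssh.HostPort) // running 127.0.0.1:32960
}
```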
helpers_test.go:235: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210609012901-9941 -n old-k8s-version-20210609012901-9941
helpers_test.go:240: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:243: (dbg) Run: out/minikube-linux-amd64 -p old-k8s-version-20210609012901-9941 logs -n 25
helpers_test.go:243: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20210609012901-9941 logs -n 25: (1.486683584s)
helpers_test.go:248: TestStartStop/group/old-k8s-version/serial/Pause logs:
-- stdout --
*
* ==> Audit <==
* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|-------------------------------|-------------------------------|
| stop | -p | newest-cni-20210609013655-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:37:37 UTC | Wed, 09 Jun 2021 01:37:48 UTC |
| | newest-cni-20210609013655-9941 | | | | | |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p | newest-cni-20210609013655-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:37:48 UTC | Wed, 09 Jun 2021 01:37:48 UTC |
| | newest-cni-20210609013655-9941 | | | | | |
| | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 | | | | | |
| start | -p | embed-certs-20210609012903-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:31:33 UTC | Wed, 09 Jun 2021 01:37:54 UTC |
| | embed-certs-20210609012903-9941 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --embed-certs | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.20.7 | | | | | |
| ssh | -p | embed-certs-20210609012903-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:05 UTC | Wed, 09 Jun 2021 01:38:05 UTC |
| | embed-certs-20210609012903-9941 | | | | | |
| | sudo crictl images -o json | | | | | |
| pause | -p | embed-certs-20210609012903-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:05 UTC | Wed, 09 Jun 2021 01:38:06 UTC |
| | embed-certs-20210609012903-9941 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| start | -p newest-cni-20210609013655-9941 --memory=2200 | newest-cni-20210609013655-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:37:48 UTC | Wed, 09 Jun 2021 01:38:07 UTC |
| | --alsologtostderr --wait=apiserver,system_pods,default_sa | | | | | |
| | --feature-gates ServerSideApply=true --network-plugin=cni | | | | | |
| | --extra-config=kubelet.network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 | | | | | |
| | --driver=docker --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.22.0-alpha.2 | | | | | |
| unpause | -p | embed-certs-20210609012903-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:07 UTC | Wed, 09 Jun 2021 01:38:07 UTC |
| | embed-certs-20210609012903-9941 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | -p | newest-cni-20210609013655-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:07 UTC | Wed, 09 Jun 2021 01:38:08 UTC |
| | newest-cni-20210609013655-9941 | | | | | |
| | sudo crictl images -o json | | | | | |
| pause | -p | newest-cni-20210609013655-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:08 UTC | Wed, 09 Jun 2021 01:38:08 UTC |
| | newest-cni-20210609013655-9941 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p | newest-cni-20210609013655-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:09 UTC | Wed, 09 Jun 2021 01:38:10 UTC |
| | newest-cni-20210609013655-9941 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p | embed-certs-20210609012903-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:08 UTC | Wed, 09 Jun 2021 01:38:11 UTC |
| | embed-certs-20210609012903-9941 | | | | | |
| delete | -p | embed-certs-20210609012903-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:11 UTC | Wed, 09 Jun 2021 01:38:12 UTC |
| | embed-certs-20210609012903-9941 | | | | | |
| delete | -p | newest-cni-20210609013655-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:11 UTC | Wed, 09 Jun 2021 01:38:14 UTC |
| | newest-cni-20210609013655-9941 | | | | | |
| delete | -p | newest-cni-20210609013655-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:14 UTC | Wed, 09 Jun 2021 01:38:14 UTC |
| | newest-cni-20210609013655-9941 | | | | | |
| start | -p false-20210609012810-9941 | false-20210609012810-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:14 UTC | Wed, 09 Jun 2021 01:39:52 UTC |
| | --memory=2048 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --cni=false --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | -p false-20210609012810-9941 | false-20210609012810-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:39:52 UTC | Wed, 09 Jun 2021 01:39:52 UTC |
| | pgrep -a kubelet | | | | | |
| delete | -p false-20210609012810-9941 | false-20210609012810-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:07 UTC | Wed, 09 Jun 2021 01:40:10 UTC |
| start | -p | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:32:07 UTC | Wed, 09 Jun 2021 01:40:19 UTC |
| | default-k8s-different-port-20210609012935-9941 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --apiserver-port=8444 | | | | | |
| | --driver=docker --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.20.7 | | | | | |
| ssh | -p | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:29 UTC | Wed, 09 Jun 2021 01:40:30 UTC |
| | default-k8s-different-port-20210609012935-9941 | | | | | |
| | sudo crictl images -o json | | | | | |
| pause | -p | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:30 UTC | Wed, 09 Jun 2021 01:40:30 UTC |
| | default-k8s-different-port-20210609012935-9941 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:31 UTC | Wed, 09 Jun 2021 01:40:32 UTC |
| | default-k8s-different-port-20210609012935-9941 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:32 UTC | Wed, 09 Jun 2021 01:40:36 UTC |
| | default-k8s-different-port-20210609012935-9941 | | | | | |
| delete | -p | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:36 UTC | Wed, 09 Jun 2021 01:40:36 UTC |
| | default-k8s-different-port-20210609012935-9941 | | | | | |
| start | -p | old-k8s-version-20210609012901-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:34:38 UTC | Wed, 09 Jun 2021 01:41:48 UTC |
| | old-k8s-version-20210609012901-9941 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.14.0 | | | | | |
| ssh | -p | old-k8s-version-20210609012901-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:41:58 UTC | Wed, 09 Jun 2021 01:41:59 UTC |
| | old-k8s-version-20210609012901-9941 | | | | | |
| | sudo crictl images -o json | | | | | |
|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
* Log file created at: 2021/06/09 01:40:36
Running on machine: debian-jenkins-agent-1
Binary: Built with gc go1.16.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0609 01:40:36.631110 352096 out.go:291] Setting OutFile to fd 1 ...
I0609 01:40:36.631229 352096 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0609 01:40:36.631240 352096 out.go:304] Setting ErrFile to fd 2...
I0609 01:40:36.631245 352096 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0609 01:40:36.631477 352096 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/bin
I0609 01:40:36.632033 352096 out.go:298] Setting JSON to false
I0609 01:40:36.673982 352096 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":5000,"bootTime":1623197837,"procs":265,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
I0609 01:40:36.674111 352096 start.go:121] virtualization: kvm guest
I0609 01:40:36.676163 352096 out.go:170] * [calico-20210609012810-9941] minikube v1.21.0-beta.0 on Debian 9.13 (kvm/amd64)
I0609 01:40:36.678185 352096 out.go:170] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig
I0609 01:40:36.679873 352096 out.go:170] - MINIKUBE_BIN=out/minikube-linux-amd64
I0609 01:40:36.681411 352096 out.go:170] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube
I0609 01:40:36.683678 352096 out.go:170] - MINIKUBE_LOCATION=11610
I0609 01:40:36.685630 352096 driver.go:335] Setting default libvirt URI to qemu:///system
I0609 01:40:36.743399 352096 docker.go:132] docker version: linux-19.03.15
I0609 01:40:36.743512 352096 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0609 01:40:36.834766 352096 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:133 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2021-06-09 01:40:36.791625716 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-1 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0609 01:40:36.834840 352096 docker.go:244] overlay module found
I0609 01:40:36.837087 352096 out.go:170] * Using the docker driver based on user configuration
I0609 01:40:36.837110 352096 start.go:279] selected driver: docker
I0609 01:40:36.837115 352096 start.go:752] validating driver "docker" against <nil>
I0609 01:40:36.837133 352096 start.go:763] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
W0609 01:40:36.837178 352096 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0609 01:40:36.837196 352096 out.go:235] ! Your cgroup does not allow setting memory.
I0609 01:40:36.838992 352096 out.go:170] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0609 01:40:36.839863 352096 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0609 01:40:36.932062 352096 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:133 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2021-06-09 01:40:36.890557056 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-1 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0609 01:40:36.932180 352096 start_flags.go:259] no existing cluster config was found, will generate one from the flags
I0609 01:40:36.932334 352096 start_flags.go:656] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0609 01:40:36.932354 352096 cni.go:93] Creating CNI manager for "calico"
I0609 01:40:36.932360 352096 start_flags.go:268] Found "Calico" CNI - setting NetworkPlugin=cni
I0609 01:40:36.932385 352096 start_flags.go:273] config:
{Name:calico-20210609012810-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:calico-20210609012810-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0609 01:40:36.934649 352096 out.go:170] * Starting control plane node calico-20210609012810-9941 in cluster calico-20210609012810-9941
I0609 01:40:36.934693 352096 cache.go:115] Beginning downloading kic base image for docker with docker
I0609 01:40:36.936147 352096 out.go:170] * Pulling base image ...
I0609 01:40:36.936172 352096 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
I0609 01:40:36.936194 352096 preload.go:125] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4
I0609 01:40:36.936205 352096 cache.go:54] Caching tarball of preloaded images
I0609 01:40:36.936277 352096 cache.go:134] Downloading gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 to local cache
I0609 01:40:36.936357 352096 preload.go:166] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0609 01:40:36.936376 352096 cache.go:57] Finished verifying existence of preloaded tar for v1.20.7 on docker
I0609 01:40:36.936388 352096 image.go:58] Checking for gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local cache directory
I0609 01:40:36.936410 352096 image.go:61] Found gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local cache directory, skipping pull
I0609 01:40:36.936420 352096 image.go:102] gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 exists in cache, skipping pull
I0609 01:40:36.936434 352096 cache.go:137] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 as a tarball
I0609 01:40:36.936440 352096 image.go:74] Checking for gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local docker daemon
I0609 01:40:36.936479 352096 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/config.json ...
I0609 01:40:36.936497 352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/config.json: {Name:mk031fde7609ae3e97daec785ed839e7488473bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:40:37.048612 352096 image.go:78] Found gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local docker daemon, skipping pull
I0609 01:40:37.048657 352096 cache.go:146] gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 exists in daemon, skipping load
I0609 01:40:37.048675 352096 cache.go:202] Successfully downloaded all kic artifacts
I0609 01:40:37.048728 352096 start.go:313] acquiring machines lock for calico-20210609012810-9941: {Name:mkae53a330b20aaf52e1813b8aee573fcaaec970 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0609 01:40:37.048858 352096 start.go:317] acquired machines lock for "calico-20210609012810-9941" in 106.275µs
I0609 01:40:37.048894 352096 start.go:89] Provisioning new machine with config: &{Name:calico-20210609012810-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:calico-20210609012810-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}
I0609 01:40:37.049004 352096 start.go:126] createHost starting for "" (driver="docker")
I0609 01:40:34.017726 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:40:37.085772 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:40:35.678351 300573 out.go:170] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
I0609 01:40:35.678380 300573 addons.go:344] enableAddons completed in 2.095265934s
I0609 01:40:35.865805 300573 pod_ready.go:102] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"False"
I0609 01:40:38.366329 300573 pod_ready.go:102] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"False"
I0609 01:40:35.493169 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:35.992256 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:36.492949 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:36.992808 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:37.492406 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:37.992460 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:38.492814 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:38.993013 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:39.492346 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:39.992376 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:37.051194 352096 out.go:197] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0609 01:40:37.051469 352096 start.go:160] libmachine.API.Create for "calico-20210609012810-9941" (driver="docker")
I0609 01:40:37.051513 352096 client.go:168] LocalClient.Create starting
I0609 01:40:37.051649 352096 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem
I0609 01:40:37.051689 352096 main.go:128] libmachine: Decoding PEM data...
I0609 01:40:37.051712 352096 main.go:128] libmachine: Parsing certificate...
I0609 01:40:37.051880 352096 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem
I0609 01:40:37.051910 352096 main.go:128] libmachine: Decoding PEM data...
I0609 01:40:37.051926 352096 main.go:128] libmachine: Parsing certificate...
I0609 01:40:37.052424 352096 cli_runner.go:115] Run: docker network inspect calico-20210609012810-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0609 01:40:37.099637 352096 cli_runner.go:162] docker network inspect calico-20210609012810-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0609 01:40:37.099719 352096 network_create.go:255] running [docker network inspect calico-20210609012810-9941] to gather additional debugging logs...
I0609 01:40:37.099742 352096 cli_runner.go:115] Run: docker network inspect calico-20210609012810-9941
W0609 01:40:37.138707 352096 cli_runner.go:162] docker network inspect calico-20210609012810-9941 returned with exit code 1
I0609 01:40:37.138742 352096 network_create.go:258] error running [docker network inspect calico-20210609012810-9941]: docker network inspect calico-20210609012810-9941: exit status 1
stdout:
[]
stderr:
Error: No such network: calico-20210609012810-9941
I0609 01:40:37.138765 352096 network_create.go:260] output of [docker network inspect calico-20210609012810-9941]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: calico-20210609012810-9941
** /stderr **
I0609 01:40:37.138809 352096 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0609 01:40:37.177770 352096 network.go:215] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-3efa0710be1e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:6f:c7:95:89}}
I0609 01:40:37.178451 352096 network.go:263] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc00072a3b8] misses:0}
I0609 01:40:37.178494 352096 network.go:210] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0609 01:40:37.178511 352096 network_create.go:106] attempt to create docker network calico-20210609012810-9941 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0609 01:40:37.178562 352096 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20210609012810-9941
I0609 01:40:37.256968 352096 network_create.go:90] docker network calico-20210609012810-9941 192.168.58.0/24 created
I0609 01:40:37.257004 352096 kic.go:106] calculated static IP "192.168.58.2" for the "calico-20210609012810-9941" container
I0609 01:40:37.257070 352096 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0609 01:40:37.300737 352096 cli_runner.go:115] Run: docker volume create calico-20210609012810-9941 --label name.minikube.sigs.k8s.io=calico-20210609012810-9941 --label created_by.minikube.sigs.k8s.io=true
I0609 01:40:37.340542 352096 oci.go:102] Successfully created a docker volume calico-20210609012810-9941
I0609 01:40:37.340623 352096 cli_runner.go:115] Run: docker run --rm --name calico-20210609012810-9941-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210609012810-9941 --entrypoint /usr/bin/test -v calico-20210609012810-9941:/var gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -d /var/lib
I0609 01:40:38.148995 352096 oci.go:106] Successfully prepared a docker volume calico-20210609012810-9941
W0609 01:40:38.149052 352096 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0609 01:40:38.149065 352096 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0609 01:40:38.149126 352096 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
I0609 01:40:38.149132 352096 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0609 01:40:38.149158 352096 kic.go:179] Starting extracting preloaded images to volume ...
I0609 01:40:38.149224 352096 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20210609012810-9941:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir
I0609 01:40:38.241538 352096 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20210609012810-9941 --name calico-20210609012810-9941 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210609012810-9941 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20210609012810-9941 --network calico-20210609012810-9941 --ip 192.168.58.2 --volume calico-20210609012810-9941:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45
I0609 01:40:38.853918 352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Running}}
I0609 01:40:38.906203 352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
I0609 01:40:38.959124 352096 cli_runner.go:115] Run: docker exec calico-20210609012810-9941 stat /var/lib/dpkg/alternatives/iptables
I0609 01:40:39.108798 352096 oci.go:278] the created container "calico-20210609012810-9941" has a running status.
I0609 01:40:39.108836 352096 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa...
I0609 01:40:39.198235 352096 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0609 01:40:39.602006 352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
I0609 01:40:39.652085 352096 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0609 01:40:39.652109 352096 kic_runner.go:115] Args: [docker exec --privileged calico-20210609012810-9941 chown docker:docker /home/docker/.ssh/authorized_keys]
I0609 01:40:40.132328 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:40:40.865096 300573 pod_ready.go:102] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"False"
I0609 01:40:42.865643 300573 pod_ready.go:102] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"False"
I0609 01:40:41.950654 352096 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20210609012810-9941:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir: (3.801357977s)
I0609 01:40:41.950723 352096 kic.go:188] duration metric: took 3.801562 seconds to extract preloaded images to volume
I0609 01:40:41.950817 352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
I0609 01:40:41.990470 352096 machine.go:88] provisioning docker machine ...
I0609 01:40:41.990506 352096 ubuntu.go:169] provisioning hostname "calico-20210609012810-9941"
I0609 01:40:41.990596 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:40:42.031665 352096 main.go:128] libmachine: Using SSH client type: native
I0609 01:40:42.031889 352096 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32985 <nil> <nil>}
I0609 01:40:42.031912 352096 main.go:128] libmachine: About to run SSH command:
sudo hostname calico-20210609012810-9941 && echo "calico-20210609012810-9941" | sudo tee /etc/hostname
I0609 01:40:42.168989 352096 main.go:128] libmachine: SSH cmd err, output: <nil>: calico-20210609012810-9941
I0609 01:40:42.169058 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:40:42.214838 352096 main.go:128] libmachine: Using SSH client type: native
I0609 01:40:42.214999 352096 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32985 <nil> <nil>}
I0609 01:40:42.215023 352096 main.go:128] libmachine: About to run SSH command:
if ! grep -xq '.*\scalico-20210609012810-9941' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20210609012810-9941/g' /etc/hosts;
else
echo '127.0.1.1 calico-20210609012810-9941' | sudo tee -a /etc/hosts;
fi
fi
I0609 01:40:42.332932 352096 main.go:128] libmachine: SSH cmd err, output: <nil>:
I0609 01:40:42.332992 352096 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube}
I0609 01:40:42.333032 352096 ubuntu.go:177] setting up certificates
I0609 01:40:42.333040 352096 provision.go:83] configureAuth start
I0609 01:40:42.333091 352096 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210609012810-9941
I0609 01:40:42.372958 352096 provision.go:137] copyHostCerts
I0609 01:40:42.373013 352096 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem, removing ...
I0609 01:40:42.373030 352096 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem
I0609 01:40:42.373084 352096 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem (1082 bytes)
I0609 01:40:42.373174 352096 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem, removing ...
I0609 01:40:42.373185 352096 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem
I0609 01:40:42.373208 352096 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem (1123 bytes)
I0609 01:40:42.373272 352096 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem, removing ...
I0609 01:40:42.373298 352096 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem
I0609 01:40:42.373324 352096 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem (1679 bytes)
I0609 01:40:42.373372 352096 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem org=jenkins.calico-20210609012810-9941 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20210609012810-9941]
I0609 01:40:42.470940 352096 provision.go:171] copyRemoteCerts
I0609 01:40:42.470996 352096 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0609 01:40:42.471030 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:40:42.516819 352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
I0609 01:40:42.604293 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0609 01:40:42.620326 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0609 01:40:42.635125 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
I0609 01:40:42.650438 352096 provision.go:86] duration metric: configureAuth took 317.389022ms
I0609 01:40:42.650459 352096 ubuntu.go:193] setting minikube options for container-runtime
I0609 01:40:42.650643 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:40:42.690608 352096 main.go:128] libmachine: Using SSH client type: native
I0609 01:40:42.690768 352096 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32985 <nil> <nil>}
I0609 01:40:42.690789 352096 main.go:128] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0609 01:40:42.809400 352096 main.go:128] libmachine: SSH cmd err, output: <nil>: overlay
I0609 01:40:42.809436 352096 ubuntu.go:71] root file system type: overlay
I0609 01:40:42.809629 352096 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
I0609 01:40:42.809695 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:40:42.849952 352096 main.go:128] libmachine: Using SSH client type: native
I0609 01:40:42.850124 352096 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32985 <nil> <nil>}
I0609 01:40:42.850223 352096 main.go:128] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0609 01:40:42.982970 352096 main.go:128] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
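(One quirk worth calling out in the logged command: the %!s(MISSING) token is how Go's fmt package renders a %s verb that was given no operand. It is a logging-time artifact only; the printf that actually ran over SSH received its argument, which is why the unit file echoed back above came through intact. A two-line reproduction of the token:)

package main

import "fmt"

func main() {
	// A format verb with no matching operand prints as %!s(MISSING),
	// exactly the token that appears in the logged command above.
	fmt.Printf("sudo mkdir -p /lib/systemd/system && printf %s\n")
	// output: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING)
}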
I0609 01:40:42.983065 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:40:43.031885 352096 main.go:128] libmachine: Using SSH client type: native
I0609 01:40:43.032086 352096 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32985 <nil> <nil>}
I0609 01:40:43.032118 352096 main.go:128] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0609 01:40:43.625675 352096 main.go:128] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2021-06-02 11:54:50.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2021-06-09 01:40:42.981589018 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
+BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
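(The command two steps up leans on diff's exit status: diff -u returns non-zero when the files differ, so the || { ...; } branch installs the rendered unit, reloads systemd, and restarts docker only when something actually changed; the diff text above is simply that command's output, and the SysV lines are systemctl enable chatter. The same install-if-changed step sketched in Go, using the paths from the log:)

package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	// Install the freshly rendered unit only when it differs from the
	// current one, then reload and restart -- mirroring the shell one-liner.
	oldUnit, _ := os.ReadFile("/lib/systemd/system/docker.service")
	newUnit, err := os.ReadFile("/lib/systemd/system/docker.service.new")
	if err != nil {
		panic(err)
	}
	if bytes.Equal(oldUnit, newUnit) {
		return // nothing changed; leave the running service alone
	}
	if err := os.Rename("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service"); err != nil {
		panic(err)
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			panic(string(out))
		}
	}
}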
I0609 01:40:43.625711 352096 machine.go:91] provisioned docker machine in 1.635218617s
I0609 01:40:43.625725 352096 client.go:171] LocalClient.Create took 6.574201593s
I0609 01:40:43.625748 352096 start.go:168] duration metric: libmachine.API.Create for "calico-20210609012810-9941" took 6.574278241s
I0609 01:40:43.625761 352096 start.go:267] post-start starting for "calico-20210609012810-9941" (driver="docker")
I0609 01:40:43.625768 352096 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0609 01:40:43.625839 352096 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0609 01:40:43.625883 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:40:43.667182 352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
I0609 01:40:43.752939 352096 ssh_runner.go:149] Run: cat /etc/os-release
I0609 01:40:43.755722 352096 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0609 01:40:43.755749 352096 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0609 01:40:43.755763 352096 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0609 01:40:43.755771 352096 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0609 01:40:43.755788 352096 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/addons for local assets ...
I0609 01:40:43.755837 352096 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/files for local assets ...
I0609 01:40:43.755931 352096 start.go:270] post-start completed in 130.162299ms
I0609 01:40:43.756175 352096 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210609012810-9941
I0609 01:40:43.794853 352096 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/config.json ...
I0609 01:40:43.795091 352096 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0609 01:40:43.795138 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:40:43.833691 352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
I0609 01:40:43.917790 352096 start.go:129] duration metric: createHost completed in 6.868772218s
I0609 01:40:43.917824 352096 start.go:80] releasing machines lock for "calico-20210609012810-9941", held for 6.868947784s
I0609 01:40:43.917911 352096 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210609012810-9941
I0609 01:40:43.958012 352096 ssh_runner.go:149] Run: systemctl --version
I0609 01:40:43.958067 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:40:43.958087 352096 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0609 01:40:43.958148 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:40:43.999990 352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
I0609 01:40:44.000156 352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
I0609 01:40:44.105048 352096 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0609 01:40:44.113782 352096 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0609 01:40:44.122327 352096 cruntime.go:225] skipping containerd shutdown because we are bound to it
I0609 01:40:44.122397 352096 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0609 01:40:44.130910 352096 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0609 01:40:44.142773 352096 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
I0609 01:40:44.201078 352096 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
I0609 01:40:44.256269 352096 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0609 01:40:44.264833 352096 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0609 01:40:44.317328 352096 ssh_runner.go:149] Run: sudo systemctl start docker
I0609 01:40:44.325668 352096 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
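(The final step above queries the daemon directly; --format is again a Go template, this time evaluated against docker version's response. A minimal way to run the same check from Go, assuming a docker CLI on PATH:)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the daemon for its server version, exactly as the log does.
	out, err := exec.Command("docker", "version",
		"--format", "{{.Server.Version}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("docker server version:", strings.TrimSpace(string(out))) // e.g. 20.10.7
}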
I0609 01:40:40.492907 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:40.992189 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:41.493228 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:41.993005 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:42.492386 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:42.992261 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:43.493058 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:43.993022 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:44.492490 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:44.993036 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:44.373093 352096 out.go:197] * Preparing Kubernetes v1.20.7 on Docker 20.10.7 ...
I0609 01:40:44.373166 352096 cli_runner.go:115] Run: docker network inspect calico-20210609012810-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0609 01:40:44.410011 352096 ssh_runner.go:149] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts
I0609 01:40:44.413077 352096 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0609 01:40:44.422262 352096 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/client.crt
I0609 01:40:44.422356 352096 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/client.key
I0609 01:40:44.422503 352096 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
I0609 01:40:44.422549 352096 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0609 01:40:44.461776 352096 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.20.7
k8s.gcr.io/kube-controller-manager:v1.20.7
k8s.gcr.io/kube-apiserver:v1.20.7
k8s.gcr.io/kube-scheduler:v1.20.7
gcr.io/k8s-minikube/storage-provisioner:v5
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
-- /stdout --
I0609 01:40:44.461803 352096 docker.go:466] Images already preloaded, skipping extraction
I0609 01:40:44.461856 352096 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0609 01:40:44.498947 352096 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.20.7
k8s.gcr.io/kube-controller-manager:v1.20.7
k8s.gcr.io/kube-apiserver:v1.20.7
k8s.gcr.io/kube-scheduler:v1.20.7
gcr.io/k8s-minikube/storage-provisioner:v5
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
-- /stdout --
I0609 01:40:44.498975 352096 cache_images.go:74] Images are preloaded, skipping loading
I0609 01:40:44.499029 352096 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0609 01:40:44.584207 352096 cni.go:93] Creating CNI manager for "calico"
I0609 01:40:44.584229 352096 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0609 01:40:44.584247 352096 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.20.7 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20210609012810-9941 NodeName:calico-20210609012810-9941 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0609 01:40:44.584403 352096 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.58.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "calico-20210609012810-9941"
kubeletExtraArgs:
node-ip: 192.168.58.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.7
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
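(Two reading notes on the manifest above. The "0%!"(MISSING) values under evictionHard are the same fmt logging artifact described earlier; the thresholds actually written are "0%", consistent with the "disable disk resource management by default" comment. And the manifest is assembled from the cluster parameters shown in the preceding kubeadm options line; a toy text/template rendering of just the networking stanza, where the template text is illustrative rather than minikube's own:)

package main

import (
	"os"
	"text/template"
)

type networking struct {
	DNSDomain, PodSubnet, ServiceSubnet string
}

const stanza = `networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	// Render the networking block from the values in the kubeadm options
	// line (pod CIDR 10.244.0.0/16, service CIDR 10.96.0.0/12).
	t := template.Must(template.New("net").Parse(stanza))
	t.Execute(os.Stdout, networking{
		DNSDomain:     "cluster.local",
		PodSubnet:     "10.244.0.0/16",
		ServiceSubnet: "10.96.0.0/12",
	})
}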
I0609 01:40:44.584487 352096 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.7/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20210609012810-9941 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
[Install]
config:
{KubernetesVersion:v1.20.7 ClusterName:calico-20210609012810-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
I0609 01:40:44.584549 352096 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.7
I0609 01:40:44.591407 352096 binaries.go:44] Found k8s binaries, skipping transfer
I0609 01:40:44.591476 352096 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0609 01:40:44.597626 352096 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
I0609 01:40:44.609338 352096 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0609 01:40:44.620431 352096 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1885 bytes)
I0609 01:40:44.631725 352096 ssh_runner.go:149] Run: grep 192.168.58.2 control-plane.minikube.internal$ /etc/hosts
I0609 01:40:44.634357 352096 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0609 01:40:44.642326 352096 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941 for IP: 192.168.58.2
I0609 01:40:44.642377 352096 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.key
I0609 01:40:44.642394 352096 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.key
I0609 01:40:44.642461 352096 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/client.key
I0609 01:40:44.642481 352096 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key.cee25041
I0609 01:40:44.642488 352096 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0609 01:40:44.840681 352096 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt.cee25041 ...
I0609 01:40:44.840717 352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt.cee25041: {Name:mkfc84e07035095def340a1ef0c06b8c2f56c745 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:40:44.840897 352096 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key.cee25041 ...
I0609 01:40:44.840910 352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key.cee25041: {Name:mk3b1eccc9f0abe0f237561b0ecff13d04e9dd19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:40:44.840989 352096 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt
I0609 01:40:44.841051 352096 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key
I0609 01:40:44.841102 352096 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.key
I0609 01:40:44.841112 352096 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.crt with IP's: []
I0609 01:40:44.915955 352096 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.crt ...
I0609 01:40:44.915989 352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.crt: {Name:mkf48058b2fd1c7451a636bd94c7654745c05033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:40:44.916188 352096 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.key ...
I0609 01:40:44.916206 352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.key: {Name:mke09647dda418d05401ddeb31cf7b4c662417a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
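(For orientation, the crypto.go steps above boil down to: sign an apiserver serving certificate with the cached minikubeCA, putting the listed IPs in the subject-alternative-name extension, then sign a front-proxy client cert with no SANs. A compact crypto/x509 sketch of the SAN-bearing step; key sizes, lifetimes, and error handling are trimmed for brevity:)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// A throwaway CA standing in for the cached minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// The serving cert, with the IP SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "calico-20210609012810-9941"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.58.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}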
I0609 01:40:44.916415 352096 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941.pem (1338 bytes)
W0609 01:40:44.916467 352096 certs.go:365] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941_empty.pem, impossibly tiny 0 bytes
I0609 01:40:44.916486 352096 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem (1675 bytes)
I0609 01:40:44.916523 352096 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem (1082 bytes)
I0609 01:40:44.916559 352096 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem (1123 bytes)
I0609 01:40:44.916590 352096 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem (1679 bytes)
I0609 01:40:44.917800 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0609 01:40:44.937170 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0609 01:40:44.956373 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0609 01:40:44.974933 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0609 01:40:44.991731 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0609 01:40:45.008489 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0609 01:40:45.031606 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0609 01:40:45.047895 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0609 01:40:45.064667 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941.pem --> /usr/share/ca-certificates/9941.pem (1338 bytes)
I0609 01:40:45.080936 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0609 01:40:45.096059 352096 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
I0609 01:40:45.107015 352096 ssh_runner.go:149] Run: openssl version
I0609 01:40:45.111407 352096 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0609 01:40:45.119189 352096 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0609 01:40:45.121891 352096 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Jun 9 00:58 /usr/share/ca-certificates/minikubeCA.pem
I0609 01:40:45.121925 352096 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0609 01:40:45.126118 352096 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0609 01:40:45.132551 352096 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9941.pem && ln -fs /usr/share/ca-certificates/9941.pem /etc/ssl/certs/9941.pem"
I0609 01:40:45.138926 352096 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9941.pem
I0609 01:40:45.141619 352096 certs.go:410] hashing: -rw-r--r-- 1 root root 1338 Jun 9 01:04 /usr/share/ca-certificates/9941.pem
I0609 01:40:45.141657 352096 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9941.pem
I0609 01:40:45.145814 352096 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9941.pem /etc/ssl/certs/51391683.0"
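(The two ln steps make the CA certs discoverable by OpenSSL's hashed-directory lookup: openssl x509 -hash prints the subject hash, b5213941 for minikubeCA here, and clients expect a <hash>.0 symlink in /etc/ssl/certs pointing at the PEM. The same pair of steps from Go, with the paths from the log and openssl assumed on PATH:)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Compute the OpenSSL subject hash for the CA cert, then create the
	// <hash>.0 symlink the cert-directory lookup expects.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // mimic ln -fs (force replace)
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		panic(err)
	}
}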
I0609 01:40:45.152149 352096 kubeadm.go:390] StartCluster: {Name:calico-20210609012810-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:calico-20210609012810-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0609 01:40:45.152257 352096 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0609 01:40:45.187288 352096 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0609 01:40:45.193888 352096 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0609 01:40:45.201487 352096 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0609 01:40:45.201538 352096 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0609 01:40:45.207661 352096 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0609 01:40:45.207713 352096 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
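(Note the shape of the kubeadm invocation: --ignore-preflight-errors takes one comma-separated value, so every waived check, including the SystemVerification skip decided a few lines up, is joined into a single flag. A trivial sketch of building such a flag, with the list abbreviated:)

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Join individual preflight checks into the single comma-separated
	// value that kubeadm's --ignore-preflight-errors flag expects.
	checks := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"Port-10250", "Swap", "Mem", "SystemVerification",
		"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
	}
	fmt.Println("--ignore-preflight-errors=" + strings.Join(checks, ","))
}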
I0609 01:40:43.186787 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:40:46.229769 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:40:45.365532 300573 pod_ready.go:102] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"False"
I0609 01:40:45.492939 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:45.992622 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:46.493059 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:46.992661 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:48.750771 344705 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.758074457s)
I0609 01:40:48.993021 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:49.269941 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:40:52.311061 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:40:51.493556 344705 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.500498227s)
I0609 01:40:51.992230 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:52.180627 344705 kubeadm.go:985] duration metric: took 19.939502771s to wait for elevateKubeSystemPrivileges.
I0609 01:40:52.180659 344705 kubeadm.go:392] StartCluster complete in 33.745162361s
I0609 01:40:52.180680 344705 settings.go:142] acquiring lock: {Name:mk8746ecf7d8ca6a3508d1e45e55db2314c0e73d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:40:52.180766 344705 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig
I0609 01:40:52.182512 344705 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig: {Name:mk288d2c4fafd90028bf76db1824dfec28d92db3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:40:52.757936 344705 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cilium-20210609012810-9941" rescaled to 1
I0609 01:40:52.758013 344705 start.go:214] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}
I0609 01:40:52.759853 344705 out.go:170] * Verifying Kubernetes components...
I0609 01:40:52.758135 344705 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0609 01:40:52.759935 344705 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0609 01:40:52.758167 344705 addons.go:342] enableAddons start: toEnable=map[], additional=[]
I0609 01:40:52.760010 344705 addons.go:59] Setting storage-provisioner=true in profile "cilium-20210609012810-9941"
I0609 01:40:52.758404 344705 cache.go:108] acquiring lock: {Name:mk2dd9808d496cd84c38482eea9e354a60be2885 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0609 01:40:52.760030 344705 addons.go:59] Setting default-storageclass=true in profile "cilium-20210609012810-9941"
I0609 01:40:52.760049 344705 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cilium-20210609012810-9941"
I0609 01:40:52.760062 344705 addons.go:135] Setting addon storage-provisioner=true in "cilium-20210609012810-9941"
W0609 01:40:52.760082 344705 addons.go:147] addon storage-provisioner should already be in state true
I0609 01:40:52.760090 344705 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 exists
I0609 01:40:52.760113 344705 host.go:66] Checking if "cilium-20210609012810-9941" exists ...
I0609 01:40:52.760111 344705 cache.go:97] cache image "minikube-local-cache-test:functional-20210609010438-9941" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941" took 1.718093ms
I0609 01:40:52.760126 344705 cache.go:81] save to tar file minikube-local-cache-test:functional-20210609010438-9941 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 succeeded
I0609 01:40:52.760140 344705 cache.go:88] Successfully saved all images to host disk.
I0609 01:40:52.760541 344705 cli_runner.go:115] Run: docker container inspect cilium-20210609012810-9941 --format={{.State.Status}}
I0609 01:40:52.760709 344705 cli_runner.go:115] Run: docker container inspect cilium-20210609012810-9941 --format={{.State.Status}}
I0609 01:40:52.761714 344705 cli_runner.go:115] Run: docker container inspect cilium-20210609012810-9941 --format={{.State.Status}}
I0609 01:40:50.469695 300573 pod_ready.go:92] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"True"
I0609 01:40:50.469731 300573 pod_ready.go:81] duration metric: took 16.612054385s waiting for pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace to be "Ready" ...
I0609 01:40:50.469746 300573 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97rr9" in "kube-system" namespace to be "Ready" ...
I0609 01:40:51.488708 300573 pod_ready.go:92] pod "kube-proxy-97rr9" in "kube-system" namespace has status "Ready":"True"
I0609 01:40:51.488734 300573 pod_ready.go:81] duration metric: took 1.018979544s waiting for pod "kube-proxy-97rr9" in "kube-system" namespace to be "Ready" ...
I0609 01:40:51.488744 300573 pod_ready.go:38] duration metric: took 17.633659357s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0609 01:40:51.488765 300573 api_server.go:50] waiting for apiserver process to appear ...
I0609 01:40:51.488807 300573 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0609 01:40:51.520972 300573 api_server.go:70] duration metric: took 17.937884491s to wait for apiserver process to appear ...
I0609 01:40:51.520999 300573 api_server.go:86] waiting for apiserver healthz status ...
I0609 01:40:51.521011 300573 api_server.go:223] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0609 01:40:51.525448 300573 api_server.go:249] https://192.168.67.2:8443/healthz returned 200:
ok
I0609 01:40:51.526192 300573 api_server.go:139] control plane version: v1.14.0
I0609 01:40:51.526211 300573 api_server.go:129] duration metric: took 5.206469ms to wait for apiserver health ...
I0609 01:40:51.526219 300573 system_pods.go:43] waiting for kube-system pods to appear ...
I0609 01:40:51.528829 300573 system_pods.go:59] 4 kube-system pods found
I0609 01:40:51.528851 300573 system_pods.go:61] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:51.528856 300573 system_pods.go:61] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:51.528865 300573 system_pods.go:61] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:51.528871 300573 system_pods.go:61] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:51.528887 300573 system_pods.go:74] duration metric: took 2.66306ms to wait for pod list to return data ...
I0609 01:40:51.528896 300573 default_sa.go:34] waiting for default service account to be created ...
I0609 01:40:51.531122 300573 default_sa.go:45] found service account: "default"
I0609 01:40:51.531139 300573 default_sa.go:55] duration metric: took 2.23539ms for default service account to be created ...
I0609 01:40:51.531146 300573 system_pods.go:116] waiting for k8s-apps to be running ...
I0609 01:40:51.536460 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:51.536487 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:51.536494 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:51.536504 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:51.536517 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:51.536541 300573 retry.go:31] will retry after 214.282984ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:40:51.755301 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:51.755331 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:51.755339 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:51.755348 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:51.755355 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:51.755369 300573 retry.go:31] will retry after 293.852686ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:40:52.053824 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:52.053857 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:52.053865 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:52.053880 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:52.053892 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:52.053908 300573 retry.go:31] will retry after 355.089487ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:40:52.413227 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:52.413262 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:52.413272 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:52.413282 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:52.413289 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:52.413304 300573 retry.go:31] will retry after 480.685997ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:40:52.898013 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:52.898051 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:52.898059 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:52.898071 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:52.898078 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:52.898093 300573 retry.go:31] will retry after 544.138839ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:40:53.446671 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:53.446706 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:53.446713 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:53.446722 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:53.446728 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:53.446742 300573 retry.go:31] will retry after 684.014074ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
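The retry.go lines above show minikube's wait loop growing its sleep between attempts (214ms, 293ms, 355ms, 480ms, 544ms, 684ms, ...) while the control-plane pods are still missing. A minimal sketch of that grow-and-retry pattern; the retryUntil helper and the ~1.4x growth factor are assumptions, not minikube's actual implementation:

```go
// Sketch only: illustrates the backoff behavior visible in the log.
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryUntil re-runs check until it succeeds or timeout elapses,
// growing the sleep between attempts the way the intervals above do.
func retryUntil(check func() error, initial, timeout time.Duration) error {
	wait := initial
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		wait = wait * 14 / 10 // roughly the growth seen in the log
	}
}

func main() {
	attempts := 0
	_ = retryUntil(func() error {
		if attempts++; attempts < 4 {
			return errors.New("missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler")
		}
		return nil
	}, 214*time.Millisecond, time.Minute)
}
```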
I0609 01:40:52.840705 344705 out.go:170] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0609 01:40:52.840860 344705 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0609 01:40:52.840873 344705 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0609 01:40:52.840938 344705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210609012810-9941
I0609 01:40:52.820388 344705 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0609 01:40:52.841301 344705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210609012810-9941
I0609 01:40:52.823016 344705 addons.go:135] Setting addon default-storageclass=true in "cilium-20210609012810-9941"
W0609 01:40:52.841379 344705 addons.go:147] addon default-storageclass should already be in state true
I0609 01:40:52.841434 344705 host.go:66] Checking if "cilium-20210609012810-9941" exists ...
I0609 01:40:52.841999 344705 cli_runner.go:115] Run: docker container inspect cilium-20210609012810-9941 --format={{.State.Status}}
I0609 01:40:52.875619 344705 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0609 01:40:52.878520 344705 node_ready.go:35] waiting up to 5m0s for node "cilium-20210609012810-9941" to be "Ready" ...
I0609 01:40:52.883106 344705 node_ready.go:49] node "cilium-20210609012810-9941" has status "Ready":"True"
I0609 01:40:52.883125 344705 node_ready.go:38] duration metric: took 4.566542ms waiting for node "cilium-20210609012810-9941" to be "Ready" ...
I0609 01:40:52.883135 344705 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0609 01:40:52.901282 344705 pod_ready.go:78] waiting up to 5m0s for pod "cilium-2rdhk" in "kube-system" namespace to be "Ready" ...
I0609 01:40:52.905753 344705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32980 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/cilium-20210609012810-9941/id_rsa Username:docker}
I0609 01:40:52.913698 344705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32980 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/cilium-20210609012810-9941/id_rsa Username:docker}
I0609 01:40:52.924428 344705 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
I0609 01:40:52.924451 344705 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0609 01:40:52.924507 344705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210609012810-9941
I0609 01:40:52.985429 344705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32980 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/cilium-20210609012810-9941/id_rsa Username:docker}
I0609 01:40:53.093158 344705 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0609 01:40:53.182043 344705 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
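Each addon apply above follows the same three steps: render the manifest in memory, look up which host port docker mapped to the node container's 22/tcp, scp the file over that ssh connection into /etc/kubernetes/addons, and run the node's bundled kubectl against it. A sketch of the port lookup using the exact --format template from the log; hostSSHPort is an illustrative helper name:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort asks docker which host port is mapped to the node
// container's sshd (22/tcp), mirroring the inspect runs in the log.
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("cilium-20210609012810-9941")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh target: 127.0.0.1:" + port)
	// scp the rendered manifest over that connection, then on the node:
	//   sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	//     /var/lib/minikube/binaries/v1.20.7/kubectl apply \
	//     -f /etc/kubernetes/addons/storage-provisioner.yaml
}
```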
I0609 01:40:53.354533 344705 start.go:725] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
I0609 01:40:53.354610 344705 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.20.7
k8s.gcr.io/kube-apiserver:v1.20.7
k8s.gcr.io/kube-controller-manager:v1.20.7
k8s.gcr.io/kube-scheduler:v1.20.7
gcr.io/k8s-minikube/storage-provisioner:v5
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
-- /stdout --
I0609 01:40:53.354626 344705 docker.go:541] minikube-local-cache-test:functional-20210609010438-9941 wasn't preloaded
I0609 01:40:53.354641 344705 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210609010438-9941]
I0609 01:40:53.355651 344705 image.go:133] retrieving image: minikube-local-cache-test:functional-20210609010438-9941
I0609 01:40:53.355676 344705 image.go:139] checking repository: index.docker.io/library/minikube-local-cache-test
I0609 01:40:53.588602 344705 out.go:170] * Enabled addons: storage-provisioner, default-storageclass
I0609 01:40:53.588639 344705 addons.go:344] enableAddons completed in 830.486904ms
W0609 01:40:54.204447 344705 image.go:146] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
I0609 01:40:54.204502 344705 image.go:147] short name: minikube-local-cache-test:functional-20210609010438-9941
I0609 01:40:54.205330 344705 image.go:175] daemon lookup for minikube-local-cache-test:functional-20210609010438-9941: Error response from daemon: reference does not exist
W0609 01:40:54.817533 344705 image.go:185] authn lookup for minikube-local-cache-test:functional-20210609010438-9941 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
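The image.go lines trace a three-step lookup for the locally built test image: ask the docker daemon, then try an anonymous registry pull (which fails with 401/UNAUTHORIZED since the tag only exists locally), and only then fall back to the on-disk cache (the chain continues at 01:40:55 below). A sketch of that fallback using go-containerregistry, which these messages appear to come from; treat the flow as an approximation, not minikube's exact code:

```go
package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/authn"
	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/daemon"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

func lookup(tag string) (v1.Image, error) {
	ref, err := name.ParseReference(tag)
	if err != nil {
		return nil, err
	}
	// "daemon lookup": ask the local docker daemon for the image.
	if img, err := daemon.Image(ref); err == nil {
		return img, nil
	}
	// "authn lookup ... (trying anon)": fall back to an anonymous
	// registry fetch, which for a purely local test tag fails with
	// UNAUTHORIZED, as logged above.
	img, err := remote.Image(ref, remote.WithAuth(authn.Anonymous))
	if err != nil {
		return nil, fmt.Errorf("remote lookup failed: %w", err)
	}
	return img, nil
}

func main() {
	if _, err := lookup("minikube-local-cache-test:functional-20210609010438-9941"); err != nil {
		fmt.Println("needs transfer from cache:", err)
	}
}
```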
I0609 01:40:54.940307 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:40:55.379843 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:40:54.134198 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:54.134226 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:54.134231 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:54.134238 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:54.134242 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:54.134254 300573 retry.go:31] will retry after 1.039068543s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:40:55.178626 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:55.178662 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:55.178669 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:55.178679 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:55.178691 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:55.178707 300573 retry.go:31] will retry after 1.02433744s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:40:56.206796 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:56.206822 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:56.206828 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:56.206835 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:56.206839 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:56.206851 300573 retry.go:31] will retry after 1.268973106s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:40:57.480720 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:57.480751 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:57.480759 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:57.480771 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:57.480778 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:57.480796 300573 retry.go:31] will retry after 1.733071555s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:40:55.410467 344705 image.go:189] remote lookup for minikube-local-cache-test:functional-20210609010438-9941: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0609 01:40:55.410515 344705 image.go:92] error retrieve Image minikube-local-cache-test:functional-20210609010438-9941 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0609 01:40:55.410544 344705 cache_images.go:106] "minikube-local-cache-test:functional-20210609010438-9941" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210609010438-9941
I0609 01:40:55.410583 344705 docker.go:236] Removing image: minikube-local-cache-test:functional-20210609010438-9941
I0609 01:40:55.410638 344705 ssh_runner.go:149] Run: docker rmi minikube-local-cache-test:functional-20210609010438-9941
I0609 01:40:55.448411 344705 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941
I0609 01:40:55.448506 344705 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
I0609 01:40:55.451714 344705 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941': No such file or directory
I0609 01:40:55.451745 344705 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941 (5120 bytes)
I0609 01:40:55.471575 344705 docker.go:203] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
I0609 01:40:55.471628 344705 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
I0609 01:40:55.762458 344705 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 from cache
I0609 01:40:55.762495 344705 cache_images.go:113] Successfully loaded all cached images
I0609 01:40:55.762502 344705 cache_images.go:82] LoadImages completed in 2.407848633s
I0609 01:40:55.762517 344705 cache_images.go:252] succeeded pushing to: cilium-20210609012810-9941
I0609 01:40:55.762522 344705 cache_images.go:253] failed pushing to:
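With both lookups failing, LoadImages decides the image "needs transfer": stat the tar on the node, scp it over when the stat exits non-zero, then docker load it, exactly the sequence logged above. A compact sketch of that flow; runCmd, the "node" ssh alias, and the example paths are illustrative stand-ins for minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

func runCmd(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %w\n%s", args, err, out)
	}
	return nil
}

func loadCachedImage(localTar, remoteTar string) error {
	// Existence check, as in the log's stat run that exits with status 1.
	if runCmd("ssh", "node", "stat", remoteTar) != nil {
		// Not on the node yet: copy the cached tar over (the scp step).
		if err := runCmd("scp", localTar, "node:"+remoteTar); err != nil {
			return err
		}
	}
	// Load the transferred tar into the node's docker daemon.
	return runCmd("ssh", "node", "docker", "load", "-i", remoteTar)
}

func main() {
	if err := loadCachedImage("/tmp/cache/example.tar", "/var/lib/minikube/images/example.tar"); err != nil {
		fmt.Println(err)
	}
}
```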
I0609 01:40:57.446509 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:40:59.919287 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:00.317663 352096 out.go:197] - Generating certificates and keys ...
I0609 01:41:00.320816 352096 out.go:197] - Booting up control plane ...
I0609 01:41:00.323612 352096 out.go:197] - Configuring RBAC rules ...
I0609 01:41:00.325728 352096 cni.go:93] Creating CNI manager for "calico"
I0609 01:41:00.327397 352096 out.go:170] * Configuring Calico (Container Networking Interface) ...
I0609 01:41:00.327463 352096 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.20.7/kubectl ...
I0609 01:41:00.327482 352096 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (22544 bytes)
I0609 01:41:00.355615 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0609 01:41:01.345873 352096 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0609 01:41:01.346015 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:01.346096 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl label nodes minikube.k8s.io/version=v1.21.0-beta.0 minikube.k8s.io/commit=d4601e4184ba947b7d077d7d426c2bca79bbf9fc minikube.k8s.io/name=calico-20210609012810-9941 minikube.k8s.io/updated_at=2021_06_09T01_41_01_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:58.423166 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:01.474794 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:40:59.218044 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:59.218071 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:59.218077 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:59.218084 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:59.218089 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:59.218101 300573 retry.go:31] will retry after 2.410580953s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:41:01.632429 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:41:01.632456 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:01.632462 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:01.632469 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:41:01.632476 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:01.632489 300573 retry.go:31] will retry after 3.437877504s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:41:02.460409 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:04.920306 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:01.767984 352096 ops.go:34] apiserver oom_adj: -16
I0609 01:41:01.768084 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:02.480180 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:02.980220 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:03.480904 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:03.980208 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:04.480690 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:04.980710 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:05.480647 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:05.979985 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:06.480212 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:04.521744 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:05.073834 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:41:05.073863 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:05.073868 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:05.073876 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:41:05.073881 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:05.073895 300573 retry.go:31] will retry after 3.261655801s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:41:08.339005 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:41:08.339042 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:08.339049 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:08.339061 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:41:08.339067 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:08.339081 300573 retry.go:31] will retry after 4.086092664s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:41:07.419175 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:09.443670 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:06.980032 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:07.480282 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:07.980274 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:08.480263 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:08.980571 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:09.480813 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:09.980588 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:10.480840 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:10.980186 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:11.480965 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:07.580079 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:10.622741 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:13.117286 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:41:13.117320 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:13.117328 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:13.117340 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:41:13.117348 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:13.117364 300573 retry.go:31] will retry after 6.402197611s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:41:13.726560 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:11.980058 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:13.480528 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:13.980786 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:15.479870 352096 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.499049149s)
I0609 01:41:15.479969 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:16.480635 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:13.666259 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:16.715529 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:16.980322 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:17.480064 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:17.980779 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:18.071429 352096 kubeadm.go:985] duration metric: took 16.725453565s to wait for elevateKubeSystemPrivileges.
I0609 01:41:18.071462 352096 kubeadm.go:392] StartCluster complete in 32.919320287s
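The burst of `kubectl get sa default` runs above, one every ~500ms, is minikube waiting for the controller-manager's serviceaccount controller to create the default ServiceAccount; elevateKubeSystemPrivileges only completes once that poll succeeds, which here took 16.7s. A minimal sketch of the poll, assuming a plain kubectl on PATH:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` every 500ms (the cadence
// visible in the timestamps above) until it succeeds or times out.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %v", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```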
I0609 01:41:18.071483 352096 settings.go:142] acquiring lock: {Name:mk8746ecf7d8ca6a3508d1e45e55db2314c0e73d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:41:18.071570 352096 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig
I0609 01:41:18.073757 352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig: {Name:mk288d2c4fafd90028bf76db1824dfec28d92db3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:41:18.664569 352096 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20210609012810-9941" rescaled to 1
I0609 01:41:18.664632 352096 start.go:214] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}
I0609 01:41:18.664651 352096 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0609 01:41:18.664714 352096 addons.go:342] enableAddons start: toEnable=map[], additional=[]
I0609 01:41:18.666538 352096 out.go:170] * Verifying Kubernetes components...
I0609 01:41:18.664779 352096 addons.go:59] Setting storage-provisioner=true in profile "calico-20210609012810-9941"
I0609 01:41:18.666596 352096 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0609 01:41:18.666612 352096 addons.go:135] Setting addon storage-provisioner=true in "calico-20210609012810-9941"
W0609 01:41:18.666630 352096 addons.go:147] addon storage-provisioner should already be in state true
I0609 01:41:18.664791 352096 addons.go:59] Setting default-storageclass=true in profile "calico-20210609012810-9941"
I0609 01:41:18.666671 352096 host.go:66] Checking if "calico-20210609012810-9941" exists ...
I0609 01:41:18.666676 352096 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20210609012810-9941"
I0609 01:41:18.664965 352096 cache.go:108] acquiring lock: {Name:mk2dd9808d496cd84c38482eea9e354a60be2885 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0609 01:41:18.666833 352096 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 exists
I0609 01:41:18.666855 352096 cache.go:97] cache image "minikube-local-cache-test:functional-20210609010438-9941" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941" took 1.89821ms
I0609 01:41:18.666869 352096 cache.go:81] save to tar file minikube-local-cache-test:functional-20210609010438-9941 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 succeeded
I0609 01:41:18.666879 352096 cache.go:88] Successfully saved all images to host disk.
I0609 01:41:18.667046 352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
I0609 01:41:18.667251 352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
I0609 01:41:18.667265 352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
I0609 01:41:18.711328 352096 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0609 01:41:18.711376 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:41:16.464152 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:18.919739 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:18.722674 352096 out.go:170] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0609 01:41:18.722788 352096 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0609 01:41:18.722802 352096 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0609 01:41:18.722851 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:41:18.758518 352096 addons.go:135] Setting addon default-storageclass=true in "calico-20210609012810-9941"
W0609 01:41:18.758544 352096 addons.go:147] addon default-storageclass should already be in state true
I0609 01:41:18.758573 352096 host.go:66] Checking if "calico-20210609012810-9941" exists ...
I0609 01:41:18.759066 352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
I0609 01:41:18.770750 352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
I0609 01:41:18.794220 352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
I0609 01:41:18.806700 352096 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
I0609 01:41:18.806724 352096 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0609 01:41:18.806770 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:41:18.861723 352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
I0609 01:41:19.254824 352096 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0609 01:41:19.257472 352096 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0609 01:41:19.269050 352096 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.58.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0609 01:41:19.269206 352096 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.20.7
k8s.gcr.io/kube-apiserver:v1.20.7
k8s.gcr.io/kube-controller-manager:v1.20.7
k8s.gcr.io/kube-scheduler:v1.20.7
gcr.io/k8s-minikube/storage-provisioner:v5
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
-- /stdout --
I0609 01:41:19.269224 352096 docker.go:541] minikube-local-cache-test:functional-20210609010438-9941 wasn't preloaded
I0609 01:41:19.269233 352096 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210609010438-9941]
I0609 01:41:19.270563 352096 node_ready.go:35] waiting up to 5m0s for node "calico-20210609012810-9941" to be "Ready" ...
I0609 01:41:19.270617 352096 image.go:133] retrieving image: minikube-local-cache-test:functional-20210609010438-9941
I0609 01:41:19.270639 352096 image.go:139] checking repository: index.docker.io/library/minikube-local-cache-test
I0609 01:41:19.344594 352096 node_ready.go:49] node "calico-20210609012810-9941" has status "Ready":"True"
I0609 01:41:19.344625 352096 node_ready.go:38] duration metric: took 74.017948ms waiting for node "calico-20210609012810-9941" to be "Ready" ...
I0609 01:41:19.344637 352096 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0609 01:41:19.359631 352096 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace to be "Ready" ...
W0609 01:41:20.095801 352096 image.go:146] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
I0609 01:41:20.095863 352096 image.go:147] short name: minikube-local-cache-test:functional-20210609010438-9941
I0609 01:41:20.096813 352096 image.go:175] daemon lookup for minikube-local-cache-test:functional-20210609010438-9941: Error response from daemon: reference does not exist
I0609 01:41:20.438848 352096 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.18134229s)
I0609 01:41:20.438935 352096 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.58.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.169850353s)
I0609 01:41:20.438963 352096 start.go:725] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
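The sed pipeline behind "host record injected into CoreDNS" rewrites the coredns ConfigMap so the Corefile gains a hosts block immediately before its forward directive, making host.minikube.internal resolve to the docker network's gateway (192.168.58.1 for this cluster). The resulting fragment looks like:

```
    hosts {
       192.168.58.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
```

The fallthrough keyword ensures every other name still reaches the regular forwarder.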
I0609 01:41:20.441405 352096 out.go:170] * Enabled addons: default-storageclass, storage-provisioner
I0609 01:41:20.441438 352096 addons.go:344] enableAddons completed in 1.776732349s
W0609 01:41:20.710811 352096 image.go:185] authn lookup for minikube-local-cache-test:functional-20210609010438-9941 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0609 01:41:21.301766 352096 image.go:189] remote lookup for minikube-local-cache-test:functional-20210609010438-9941: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0609 01:41:21.301819 352096 image.go:92] error retrieve Image minikube-local-cache-test:functional-20210609010438-9941 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0609 01:41:21.301851 352096 cache_images.go:106] "minikube-local-cache-test:functional-20210609010438-9941" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210609010438-9941
I0609 01:41:21.301896 352096 docker.go:236] Removing image: minikube-local-cache-test:functional-20210609010438-9941
I0609 01:41:21.301940 352096 ssh_runner.go:149] Run: docker rmi minikube-local-cache-test:functional-20210609010438-9941
I0609 01:41:21.448602 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:21.464097 352096 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941
I0609 01:41:21.464209 352096 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
I0609 01:41:21.467662 352096 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941': No such file or directory
I0609 01:41:21.467695 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941 (5120 bytes)
I0609 01:41:21.553071 352096 docker.go:203] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
I0609 01:41:21.553158 352096 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
I0609 01:41:19.755463 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:19.524872 300573 system_pods.go:86] 7 kube-system pods found
I0609 01:41:19.524911 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:19.524921 300573 system_pods.go:89] "kube-apiserver-old-k8s-version-20210609012901-9941" [c8550af5-c8c3-11eb-a78f-02427f02d9a2] Pending
I0609 01:41:19.524931 300573 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210609012901-9941" [c8ed9fe0-c8c3-11eb-a78f-02427f02d9a2] Pending
I0609 01:41:19.524938 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:19.524948 300573 system_pods.go:89] "kube-scheduler-old-k8s-version-20210609012901-9941" [c768fa6f-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:19.524961 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:41:19.524978 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:19.524996 300573 retry.go:31] will retry after 6.062999549s: missing components: etcd, kube-apiserver, kube-controller-manager
I0609 01:41:21.419636 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:23.919505 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:21.913966 352096 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 from cache
I0609 01:41:21.914009 352096 cache_images.go:113] Successfully loaded all cached images
I0609 01:41:21.914025 352096 cache_images.go:82] LoadImages completed in 2.644783095s
I0609 01:41:21.914043 352096 cache_images.go:252] succeeded pushing to: calico-20210609012810-9941
I0609 01:41:21.914049 352096 cache_images.go:253] failed pushing to:
I0609 01:41:23.875804 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:25.876212 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:22.798808 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:25.839455 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:25.592272 300573 system_pods.go:86] 7 kube-system pods found
I0609 01:41:25.592298 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:25.592304 300573 system_pods.go:89] "kube-apiserver-old-k8s-version-20210609012901-9941" [c8550af5-c8c3-11eb-a78f-02427f02d9a2] Pending
I0609 01:41:25.592308 300573 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210609012901-9941" [c8ed9fe0-c8c3-11eb-a78f-02427f02d9a2] Pending
I0609 01:41:25.592311 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:25.592317 300573 system_pods.go:89] "kube-scheduler-old-k8s-version-20210609012901-9941" [c768fa6f-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:25.592325 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:41:25.592331 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:25.592342 300573 retry.go:31] will retry after 10.504197539s: missing components: etcd, kube-apiserver, kube-controller-manager
I0609 01:41:25.919767 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:28.419788 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:28.920252 344705 pod_ready.go:92] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"True"
I0609 01:41:28.920277 344705 pod_ready.go:81] duration metric: took 36.018972007s waiting for pod "cilium-2rdhk" in "kube-system" namespace to be "Ready" ...
I0609 01:41:28.920288 344705 pod_ready.go:78] waiting up to 5m0s for pod "cilium-operator-7c755f4594-2x5fn" in "kube-system" namespace to be "Ready" ...
I0609 01:41:28.924675 344705 pod_ready.go:92] pod "cilium-operator-7c755f4594-2x5fn" in "kube-system" namespace has status "Ready":"True"
I0609 01:41:28.924691 344705 pod_ready.go:81] duration metric: took 4.397091ms waiting for pod "cilium-operator-7c755f4594-2x5fn" in "kube-system" namespace to be "Ready" ...
I0609 01:41:28.924702 344705 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-42mdv" in "kube-system" namespace to be "Ready" ...
I0609 01:41:28.929071 344705 pod_ready.go:92] pod "coredns-74ff55c5b-42mdv" in "kube-system" namespace has status "Ready":"True"
I0609 01:41:28.929091 344705 pod_ready.go:81] duration metric: took 4.382306ms waiting for pod "coredns-74ff55c5b-42mdv" in "kube-system" namespace to be "Ready" ...
I0609 01:41:28.929102 344705 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-jv4pl" in "kube-system" namespace to be "Ready" ...
I0609 01:41:28.931060 344705 pod_ready.go:97] error getting pod "coredns-74ff55c5b-jv4pl" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-jv4pl" not found
I0609 01:41:28.931084 344705 pod_ready.go:81] duration metric: took 1.975143ms waiting for pod "coredns-74ff55c5b-jv4pl" in "kube-system" namespace to be "Ready" ...
E0609 01:41:28.931095 344705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-74ff55c5b-jv4pl" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-jv4pl" not found
I0609 01:41:28.931103 344705 pod_ready.go:78] waiting up to 5m0s for pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
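pod_ready walks the system-critical pods one at a time, and a pod that disappears mid-wait, like coredns-74ff55c5b-jv4pl (a replica removed when the coredns deployment was rescaled to 1), is logged and skipped rather than treated as a failure. A sketch of that wait using client-go; the helper name, kubeconfig path, and 2s poll interval are assumptions:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			// Pod was deleted while we waited: skip it, as the log does.
			fmt.Printf("pod %q not found (skipping!)\n", name)
			return nil
		}
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %q never reached Ready within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(cs, "kube-system", "etcd-cilium-20210609012810-9941", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```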
I0609 01:41:27.876306 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:30.376138 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:28.884648 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:31.933672 329232 stop.go:59] stop err: Maximum number of retries (60) exceeded
I0609 01:41:31.933729 329232 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
I0609 01:41:31.934195 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
W0609 01:41:31.985166 329232 delete.go:135] deletehost failed: Docker machine "auto-20210609012809-9941" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
I0609 01:41:31.985255 329232 cli_runner.go:115] Run: docker container inspect -f {{.Id}} auto-20210609012809-9941
I0609 01:41:32.031852 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:32.081551 329232 cli_runner.go:115] Run: docker exec --privileged -t auto-20210609012809-9941 /bin/bash -c "sudo init 0"
W0609 01:41:32.125884 329232 cli_runner.go:162] docker exec --privileged -t auto-20210609012809-9941 /bin/bash -c "sudo init 0" returned with exit code 1
I0609 01:41:32.125930 329232 oci.go:632] error shutdown auto-20210609012809-9941: docker exec --privileged -t auto-20210609012809-9941 /bin/bash -c "sudo init 0": exit status 1
stdout:
stderr:
Error response from daemon: Container bc54bc9bf415ee2bb0df1bcad0aed4e971bd39991c0782ffae750733117660bd is not running
I0609 01:41:33.127009 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:33.188615 329232 oci.go:646] temporary error: container auto-20210609012809-9941 status is but expect it to be exited
I0609 01:41:33.188641 329232 oci.go:652] Successfully shutdown container auto-20210609012809-9941
I0609 01:41:33.188680 329232 cli_runner.go:115] Run: docker rm -f -v auto-20210609012809-9941
I0609 01:41:33.232875 329232 cli_runner.go:115] Run: docker container inspect -f {{.Id}} auto-20210609012809-9941
W0609 01:41:33.278916 329232 cli_runner.go:162] docker container inspect -f {{.Id}} auto-20210609012809-9941 returned with exit code 1
I0609 01:41:33.279004 329232 cli_runner.go:115] Run: docker network inspect auto-20210609012809-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0609 01:41:33.317124 329232 cli_runner.go:162] docker network inspect auto-20210609012809-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0609 01:41:33.317184 329232 network_create.go:255] running [docker network inspect auto-20210609012809-9941] to gather additional debugging logs...
I0609 01:41:33.317205 329232 cli_runner.go:115] Run: docker network inspect auto-20210609012809-9941
W0609 01:41:33.354864 329232 cli_runner.go:162] docker network inspect auto-20210609012809-9941 returned with exit code 1
I0609 01:41:33.354894 329232 network_create.go:258] error running [docker network inspect auto-20210609012809-9941]: docker network inspect auto-20210609012809-9941: exit status 1
stdout:
[]
stderr:
Error: No such network: auto-20210609012809-9941
I0609 01:41:33.354910 329232 network_create.go:260] output of [docker network inspect auto-20210609012809-9941]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: auto-20210609012809-9941
** /stderr **
W0609 01:41:33.355033 329232 delete.go:139] delete failed (probably ok) <nil>
I0609 01:41:33.355043 329232 fix.go:120] Sleeping 1 second for extra luck!
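The teardown above is deliberately best-effort: stophost times out, deletehost reports the machine missing, `sudo init 0` fails because the container already stopped, and the network is already gone, yet each failure is logged as "(probably ok)" because the only goal is that nothing named auto-20210609012809-9941 survives before createHost rebuilds it. A minimal sketch of that pattern; the orchestration is illustrative, the docker commands are the ones logged:

```go
package main

import (
	"log"
	"os/exec"
)

// tryRun logs failures and keeps going: removing something that is
// already gone counts as success here.
func tryRun(args ...string) {
	if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
		log.Printf("%v failed (probably ok): %v\n%s", args, err, out)
	}
}

func main() {
	name := "auto-20210609012809-9941"
	// Ask the guest to power off; harmless if it already exited.
	tryRun("docker", "exec", "--privileged", "-t", name, "/bin/bash", "-c", "sudo init 0")
	// Force-remove the container and its anonymous volumes.
	tryRun("docker", "rm", "-f", "-v", name)
}
```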
I0609 01:41:34.355909 329232 start.go:126] createHost starting for "" (driver="docker")
I0609 01:41:30.941410 344705 pod_ready.go:102] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:32.942019 344705 pod_ready.go:102] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:34.942818 344705 pod_ready.go:102] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:32.377229 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:34.876436 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:34.358151 329232 out.go:197] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0609 01:41:34.358255 329232 start.go:160] libmachine.API.Create for "auto-20210609012809-9941" (driver="docker")
I0609 01:41:34.358292 329232 client.go:168] LocalClient.Create starting
I0609 01:41:34.358357 329232 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem
I0609 01:41:34.358386 329232 main.go:128] libmachine: Decoding PEM data...
I0609 01:41:34.358404 329232 main.go:128] libmachine: Parsing certificate...
I0609 01:41:34.358508 329232 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem
I0609 01:41:34.358532 329232 main.go:128] libmachine: Decoding PEM data...
I0609 01:41:34.358541 329232 main.go:128] libmachine: Parsing certificate...
I0609 01:41:34.358756 329232 cli_runner.go:115] Run: docker network inspect auto-20210609012809-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0609 01:41:34.402255 329232 cli_runner.go:162] docker network inspect auto-20210609012809-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0609 01:41:34.402349 329232 network_create.go:255] running [docker network inspect auto-20210609012809-9941] to gather additional debugging logs...
I0609 01:41:34.402373 329232 cli_runner.go:115] Run: docker network inspect auto-20210609012809-9941
W0609 01:41:34.447755 329232 cli_runner.go:162] docker network inspect auto-20210609012809-9941 returned with exit code 1
I0609 01:41:34.447782 329232 network_create.go:258] error running [docker network inspect auto-20210609012809-9941]: docker network inspect auto-20210609012809-9941: exit status 1
stdout:
[]
stderr:
Error: No such network: auto-20210609012809-9941
I0609 01:41:34.447793 329232 network_create.go:260] output of [docker network inspect auto-20210609012809-9941]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: auto-20210609012809-9941
** /stderr **
I0609 01:41:34.447829 329232 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0609 01:41:34.487524 329232 network.go:215] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-3efa0710be1e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:6f:c7:95:89}}
I0609 01:41:34.488287 329232 network.go:215] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-494a1c72530c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:2d:51:70:a3}}
I0609 01:41:34.489047 329232 network.go:215] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-3b40e12707af IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ac:37:f7:3a}}
I0609 01:41:34.489905 329232 network.go:263] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000136218 192.168.76.0:0xc000408548] misses:0}
I0609 01:41:34.489944 329232 network.go:210] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
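For reference, the subnet scan above (three occupied /24s skipped, then 192.168.76.0/24 reserved) can be approximated by hand with the Docker CLI. A minimal sketch, not minikube's actual code:

for net in $(docker network ls -q); do
  # Print the subnet each existing network claims; minikube walks candidate
  # private /24s (192.168.49.0, .58.0, .67.0, ...) and takes the first one
  # missing from this list.
  docker network inspect "$net" --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
done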
I0609 01:41:34.489977 329232 network_create.go:106] attempt to create docker network auto-20210609012809-9941 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I0609 01:41:34.490049 329232 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20210609012809-9941
I0609 01:41:34.563866 329232 network_create.go:90] docker network auto-20210609012809-9941 192.168.76.0/24 created
I0609 01:41:34.563896 329232 kic.go:106] calculated static IP "192.168.76.2" for the "auto-20210609012809-9941" container
I0609 01:41:34.563950 329232 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0609 01:41:34.605010 329232 cli_runner.go:115] Run: docker volume create auto-20210609012809-9941 --label name.minikube.sigs.k8s.io=auto-20210609012809-9941 --label created_by.minikube.sigs.k8s.io=true
I0609 01:41:34.642891 329232 oci.go:102] Successfully created a docker volume auto-20210609012809-9941
I0609 01:41:34.642974 329232 cli_runner.go:115] Run: docker run --rm --name auto-20210609012809-9941-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210609012809-9941 --entrypoint /usr/bin/test -v auto-20210609012809-9941:/var gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -d /var/lib
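The --entrypoint /usr/bin/test run above doubles as a smoke test: the sidecar exits 0 only if /var/lib is present once the volume is mounted (Docker seeds a new named volume from the image's content at the mount path). Generalized, with placeholder names:

# Exit status 0 means the directory exists inside the mounted volume,
# i.e. the volume mounts cleanly and is ready to receive the preload.
docker run --rm -v "$VOLUME:/var" --entrypoint /usr/bin/test "$KICBASE_IMAGE" -d /var/lib
echo "volume check: $?"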
I0609 01:41:35.363820 329232 oci.go:106] Successfully prepared a docker volume auto-20210609012809-9941
W0609 01:41:35.363866 329232 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0609 01:41:35.363875 329232 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0609 01:41:35.363883 329232 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
I0609 01:41:35.363916 329232 kic.go:179] Starting extracting preloaded images to volume ...
I0609 01:41:35.363930 329232 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0609 01:41:35.363995 329232 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210609012809-9941:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir
I0609 01:41:35.467993 329232 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20210609012809-9941 --name auto-20210609012809-9941 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210609012809-9941 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20210609012809-9941 --network auto-20210609012809-9941 --ip 192.168.76.2 --volume auto-20210609012809-9941:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45
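The kic container publishes sshd (22), the Docker API (2376), the Kubernetes API server (8443), and two service ports (5000, 32443) on ephemeral host ports bound to 127.0.0.1; the repeated docker container inspect -f ...HostPort calls below read those mappings back. The equivalent one-off query:

# Shows the 127.0.0.1:<port> mapping for the container's sshd;
# later log lines resolve this to 127.0.0.1:32990.
docker port auto-20210609012809-9941 22/tcp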
I0609 01:41:35.995981 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Running}}
I0609 01:41:36.052103 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:36.105861 329232 cli_runner.go:115] Run: docker exec auto-20210609012809-9941 stat /var/lib/dpkg/alternatives/iptables
I0609 01:41:36.272972 329232 oci.go:278] the created container "auto-20210609012809-9941" has a running status.
I0609 01:41:36.273013 329232 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa...
I0609 01:41:36.425757 329232 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0609 01:41:36.825610 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:36.868189 329232 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0609 01:41:36.868214 329232 kic_runner.go:115] Args: [docker exec --privileged auto-20210609012809-9941 chown docker:docker /home/docker/.ssh/authorized_keys]
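The three steps above (generate an RSA keypair on the host, install the public half as authorized_keys, fix its ownership) are what make the later SSH provisioning possible. Replayed by hand, with illustrative names:

ssh-keygen -t rsa -N '' -f ./id_rsa                  # host-side keypair
docker cp ./id_rsa.pub CONTAINER:/home/docker/.ssh/authorized_keys
docker exec --privileged CONTAINER chown docker:docker /home/docker/.ssh/authorized_keys
ssh -i ./id_rsa -p HOSTPORT docker@127.0.0.1 true    # confirm login works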
I0609 01:41:36.102263 300573 system_pods.go:86] 8 kube-system pods found
I0609 01:41:36.102300 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:36.102308 300573 system_pods.go:89] "etcd-old-k8s-version-20210609012901-9941" [d1de6264-c8c3-11eb-a78f-02427f02d9a2] Pending
I0609 01:41:36.102315 300573 system_pods.go:89] "kube-apiserver-old-k8s-version-20210609012901-9941" [c8550af5-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:36.102323 300573 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210609012901-9941" [c8ed9fe0-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:36.102329 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:36.102336 300573 system_pods.go:89] "kube-scheduler-old-k8s-version-20210609012901-9941" [c768fa6f-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:36.102347 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:41:36.102364 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:36.102381 300573 retry.go:31] will retry after 12.194240946s: missing components: etcd
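system_pods.go polls kube-system until every expected component reports Running, retrying with backoff while one (here etcd, still Pending) is missing. A comparable manual probe, assuming kubectl is pointed at the same cluster's kubeconfig:

# One line per kube-system pod with its phase; anything other than
# "Running" accounts for the retry above.
kubectl get pods -n kube-system \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'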
I0609 01:41:37.093269 344705 pod_ready.go:102] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:39.442809 344705 pod_ready.go:102] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:39.940516 344705 pod_ready.go:92] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"True"
I0609 01:41:39.940545 344705 pod_ready.go:81] duration metric: took 11.009433469s waiting for pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
I0609 01:41:39.940560 344705 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-cilium-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
I0609 01:41:39.944617 344705 pod_ready.go:92] pod "kube-apiserver-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"True"
I0609 01:41:39.944633 344705 pod_ready.go:81] duration metric: took 4.066455ms waiting for pod "kube-apiserver-cilium-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
I0609 01:41:39.944642 344705 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
I0609 01:41:37.080706 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:39.379466 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:41.383974 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:39.584397 329232 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210609012809-9941:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir: (4.220346647s)
I0609 01:41:39.584427 329232 kic.go:188] duration metric: took 4.220510 seconds to extract preloaded images to volume
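The 4.2 s step just completed is minikube's preload fast path: rather than pulling ten images individually, one lz4-compressed tarball of the runtime's image store is unpacked straight into the node's /var volume. The manual equivalent, with placeholder names:

docker run --rm --entrypoint /usr/bin/tar \
  -v "$PRELOAD_TARBALL:/preloaded.tar:ro" \
  -v "$NODE_VOLUME:/extractDir" \
  "$KICBASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir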
I0609 01:41:39.584497 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:39.635769 329232 machine.go:88] provisioning docker machine ...
I0609 01:41:39.635827 329232 ubuntu.go:169] provisioning hostname "auto-20210609012809-9941"
I0609 01:41:39.635904 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:39.684460 329232 main.go:128] libmachine: Using SSH client type: native
I0609 01:41:39.684645 329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32990 <nil> <nil>}
I0609 01:41:39.684660 329232 main.go:128] libmachine: About to run SSH command:
sudo hostname auto-20210609012809-9941 && echo "auto-20210609012809-9941" | sudo tee /etc/hostname
I0609 01:41:39.841506 329232 main.go:128] libmachine: SSH cmd err, output: <nil>: auto-20210609012809-9941
I0609 01:41:39.841577 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:39.885725 329232 main.go:128] libmachine: Using SSH client type: native
I0609 01:41:39.885870 329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32990 <nil> <nil>}
I0609 01:41:39.885889 329232 main.go:128] libmachine: About to run SSH command:
if ! grep -xq '.*\sauto-20210609012809-9941' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20210609012809-9941/g' /etc/hosts;
else
echo '127.0.1.1 auto-20210609012809-9941' | sudo tee -a /etc/hosts;
fi
fi
I0609 01:41:40.009081 329232 main.go:128] libmachine: SSH cmd err, output: <nil>:
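The hostname script above is deliberately idempotent: it rewrites an existing 127.0.1.1 line in place and appends only when no entry for the name exists. The same idiom as a reusable sketch (the helper name is hypothetical):

# Add "ip name" to /etc/hosts only when no line for name is present yet.
ensure_hosts_entry() {
  grep -q "[[:space:]]$2\$" /etc/hosts || echo "$1 $2" | sudo tee -a /etc/hosts
}
ensure_hosts_entry 127.0.1.1 auto-20210609012809-9941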
I0609 01:41:40.009113 329232 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube}
I0609 01:41:40.009136 329232 ubuntu.go:177] setting up certificates
I0609 01:41:40.009147 329232 provision.go:83] configureAuth start
I0609 01:41:40.009201 329232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210609012809-9941
I0609 01:41:40.054568 329232 provision.go:137] copyHostCerts
I0609 01:41:40.054639 329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem, removing ...
I0609 01:41:40.054650 329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem
I0609 01:41:40.054702 329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem (1082 bytes)
I0609 01:41:40.054772 329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem, removing ...
I0609 01:41:40.054816 329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem
I0609 01:41:40.054836 329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem (1123 bytes)
I0609 01:41:40.054888 329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem, removing ...
I0609 01:41:40.054896 329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem
I0609 01:41:40.054916 329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem (1679 bytes)
I0609 01:41:40.054956 329232 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem org=jenkins.auto-20210609012809-9941 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20210609012809-9941]
I0609 01:41:40.199140 329232 provision.go:171] copyRemoteCerts
I0609 01:41:40.199207 329232 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0609 01:41:40.199267 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:40.240189 329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
I0609 01:41:40.339747 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0609 01:41:40.358551 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0609 01:41:40.377700 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
I0609 01:41:40.396157 329232 provision.go:86] duration metric: configureAuth took 386.999034ms
I0609 01:41:40.396180 329232 ubuntu.go:193] setting minikube options for container-runtime
I0609 01:41:40.396396 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:40.437678 329232 main.go:128] libmachine: Using SSH client type: native
I0609 01:41:40.437928 329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32990 <nil> <nil>}
I0609 01:41:40.437947 329232 main.go:128] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0609 01:41:40.565938 329232 main.go:128] libmachine: SSH cmd err, output: <nil>: overlay
I0609 01:41:40.565966 329232 ubuntu.go:71] root file system type: overlay
I0609 01:41:40.566224 329232 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
I0609 01:41:40.566318 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:40.609110 329232 main.go:128] libmachine: Using SSH client type: native
I0609 01:41:40.609254 329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32990 <nil> <nil>}
I0609 01:41:40.609318 329232 main.go:128] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0609 01:41:40.742784 329232 main.go:128] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
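The empty ExecStart= line in the unit above is the reset idiom its inline comments describe: for a non-oneshot service, systemd rejects two accumulated ExecStart commands, so the first directive clears whatever the base unit defined. Once installed, the effective command can be checked from inside the node:

# Expect exactly one dockerd invocation; a second surviving ExecStart would
# make systemd refuse to start this Type=notify service.
sudo systemctl show docker --property=ExecStart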
I0609 01:41:40.742865 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:40.799645 329232 main.go:128] libmachine: Using SSH client type: native
I0609 01:41:40.799898 329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32990 <nil> <nil>}
I0609 01:41:40.799934 329232 main.go:128] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0609 01:41:41.471089 329232 main.go:128] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2021-06-02 11:54:50.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2021-06-09 01:41:40.733754700 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
+BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
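The diff -u old new || { mv ...; systemctl ... } one-liner above is a change-detection guard: the unit is swapped in and docker restarted only when the rendered file differs from what is on disk, which the printed diff confirms here (the stock socket-activated dockerd becomes a TLS-verified daemon on tcp://0.0.0.0:2376). The guard, generalized as a sketch:

# install_if_changed SRC DST SERVICE: install SRC over DST and bounce SERVICE
# only when they differ; an identical file leaves the running daemon alone.
install_if_changed() {
  sudo diff -u "$2" "$1" >/dev/null ||
    { sudo mv "$1" "$2" && sudo systemctl daemon-reload && sudo systemctl restart "$3"; }
}
install_if_changed /lib/systemd/system/docker.service.new \
                   /lib/systemd/system/docker.service docker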
I0609 01:41:41.471128 329232 machine.go:91] provisioned docker machine in 1.835332676s
I0609 01:41:41.471143 329232 client.go:171] LocalClient.Create took 7.112842351s
I0609 01:41:41.471164 329232 start.go:168] duration metric: libmachine.API.Create for "auto-20210609012809-9941" took 7.112906767s
I0609 01:41:41.471179 329232 start.go:267] post-start starting for "auto-20210609012809-9941" (driver="docker")
I0609 01:41:41.471186 329232 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0609 01:41:41.471252 329232 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0609 01:41:41.471302 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:41.519729 329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
I0609 01:41:41.609111 329232 ssh_runner.go:149] Run: cat /etc/os-release
I0609 01:41:41.611701 329232 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0609 01:41:41.611732 329232 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0609 01:41:41.611740 329232 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0609 01:41:41.611745 329232 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0609 01:41:41.611753 329232 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/addons for local assets ...
I0609 01:41:41.611793 329232 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/files for local assets ...
I0609 01:41:41.611879 329232 start.go:270] post-start completed in 140.693775ms
I0609 01:41:41.612136 329232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210609012809-9941
I0609 01:41:41.660654 329232 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/config.json ...
I0609 01:41:41.660931 329232 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0609 01:41:41.660996 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:41.708265 329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
I0609 01:41:41.793790 329232 start.go:129] duration metric: createHost completed in 7.437849081s
I0609 01:41:41.793878 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
W0609 01:41:41.834734 329232 fix.go:134] unexpected machine state, will restart: <nil>
I0609 01:41:41.834764 329232 machine.go:88] provisioning docker machine ...
I0609 01:41:41.834786 329232 ubuntu.go:169] provisioning hostname "auto-20210609012809-9941"
I0609 01:41:41.834833 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:41.879476 329232 main.go:128] libmachine: Using SSH client type: native
I0609 01:41:41.879641 329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32990 <nil> <nil>}
I0609 01:41:41.879661 329232 main.go:128] libmachine: About to run SSH command:
sudo hostname auto-20210609012809-9941 && echo "auto-20210609012809-9941" | sudo tee /etc/hostname
I0609 01:41:42.011151 329232 main.go:128] libmachine: SSH cmd err, output: <nil>: auto-20210609012809-9941
I0609 01:41:42.011225 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:42.061407 329232 main.go:128] libmachine: Using SSH client type: native
I0609 01:41:42.061641 329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32990 <nil> <nil>}
I0609 01:41:42.061675 329232 main.go:128] libmachine: About to run SSH command:
if ! grep -xq '.*\sauto-20210609012809-9941' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20210609012809-9941/g' /etc/hosts;
else
echo '127.0.1.1 auto-20210609012809-9941' | sudo tee -a /etc/hosts;
fi
fi
I0609 01:41:42.184948 329232 main.go:128] libmachine: SSH cmd err, output: <nil>:
I0609 01:41:42.184977 329232 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube}
I0609 01:41:42.185001 329232 ubuntu.go:177] setting up certificates
I0609 01:41:42.185011 329232 provision.go:83] configureAuth start
I0609 01:41:42.185062 329232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210609012809-9941
I0609 01:41:42.223424 329232 provision.go:137] copyHostCerts
I0609 01:41:42.223473 329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem, removing ...
I0609 01:41:42.223480 329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem
I0609 01:41:42.223524 329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem (1082 bytes)
I0609 01:41:42.223592 329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem, removing ...
I0609 01:41:42.223605 329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem
I0609 01:41:42.223629 329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem (1123 bytes)
I0609 01:41:42.223679 329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem, removing ...
I0609 01:41:42.223689 329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem
I0609 01:41:42.223706 329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem (1679 bytes)
I0609 01:41:42.223802 329232 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem org=jenkins.auto-20210609012809-9941 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20210609012809-9941]
I0609 01:41:42.486214 329232 provision.go:171] copyRemoteCerts
I0609 01:41:42.486276 329232 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0609 01:41:42.486327 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:42.526157 329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
I0609 01:41:42.612850 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0609 01:41:42.630046 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0609 01:41:42.647341 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem --> /etc/docker/server.pem (1245 bytes)
I0609 01:41:42.663823 329232 provision.go:86] duration metric: configureAuth took 478.797993ms
I0609 01:41:42.663855 329232 ubuntu.go:193] setting minikube options for container-runtime
I0609 01:41:42.664049 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:42.708962 329232 main.go:128] libmachine: Using SSH client type: native
I0609 01:41:42.709147 329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32990 <nil> <nil>}
I0609 01:41:42.709164 329232 main.go:128] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0609 01:41:42.837104 329232 main.go:128] libmachine: SSH cmd err, output: <nil>: overlay
I0609 01:41:42.837131 329232 ubuntu.go:71] root file system type: overlay
I0609 01:41:42.837293 329232 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
I0609 01:41:42.837345 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:42.884564 329232 main.go:128] libmachine: Using SSH client type: native
I0609 01:41:42.884726 329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32990 <nil> <nil>}
I0609 01:41:42.884819 329232 main.go:128] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0609 01:41:43.017785 329232 main.go:128] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0609 01:41:43.017862 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:43.058769 329232 main.go:128] libmachine: Using SSH client type: native
I0609 01:41:43.058909 329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32990 <nil> <nil>}
I0609 01:41:43.058927 329232 main.go:128] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0609 01:41:43.180717 329232 main.go:128] libmachine: SSH cmd err, output: <nil>:
I0609 01:41:43.180750 329232 machine.go:91] provisioned docker machine in 1.345979023s
I0609 01:41:43.180763 329232 start.go:267] post-start starting for "auto-20210609012809-9941" (driver="docker")
I0609 01:41:43.180773 329232 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0609 01:41:43.180829 329232 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0609 01:41:43.180871 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:43.220933 329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
I0609 01:41:43.308831 329232 ssh_runner.go:149] Run: cat /etc/os-release
I0609 01:41:43.311629 329232 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0609 01:41:43.311653 329232 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0609 01:41:43.311664 329232 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0609 01:41:43.311671 329232 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0609 01:41:43.311681 329232 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/addons for local assets ...
I0609 01:41:43.311732 329232 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/files for local assets ...
I0609 01:41:43.311850 329232 start.go:270] post-start completed in 131.0789ms
I0609 01:41:43.311895 329232 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0609 01:41:43.311938 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:43.351864 329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
I0609 01:41:43.439589 329232 fix.go:57] fixHost completed within 3m18.46145985s
I0609 01:41:43.439614 329232 start.go:80] releasing machines lock for "auto-20210609012809-9941", held for 3m18.461506998s
I0609 01:41:43.439689 329232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210609012809-9941
I0609 01:41:43.480908 329232 ssh_runner.go:149] Run: sudo service containerd status
I0609 01:41:43.480953 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:43.480998 329232 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0609 01:41:43.481050 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:43.523337 329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
I0609 01:41:43.523672 329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
I0609 01:41:43.625901 329232 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0609 01:41:43.634199 329232 cruntime.go:225] skipping containerd shutdown because we are bound to it
I0609 01:41:43.634259 329232 ssh_runner.go:149] Run: sudo service crio status
I0609 01:41:43.651967 329232 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0609 01:41:43.663538 329232 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0609 01:41:43.671774 329232 ssh_runner.go:149] Run: sudo service docker status
I0609 01:41:43.685805 329232 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0609 01:41:41.955318 344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:44.454390 344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:43.733795 329232 out.go:197] * Preparing Kubernetes v1.20.7 on Docker 20.10.7 ...
I0609 01:41:43.733887 329232 cli_runner.go:115] Run: docker network inspect auto-20210609012809-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0609 01:41:43.781233 329232 ssh_runner.go:149] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0609 01:41:43.784669 329232 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
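The temp-file-then-sudo-cp dance above exists because output redirection is performed by the calling shell, not by sudo; the file is therefore assembled unprivileged and copied into place with root rights. Illustrated:

# Fails: the >> redirection is opened by the non-root shell, not by sudo.
#   sudo echo "192.168.76.1 host.minikube.internal" >> /etc/hosts
# Works: drop any stale entry, rebuild, then copy as root.
{ grep -v 'host.minikube.internal$' /etc/hosts
  echo "192.168.76.1 host.minikube.internal"; } > /tmp/hosts.$$
sudo cp /tmp/hosts.$$ /etc/hosts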
I0609 01:41:43.794580 329232 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/client.crt
I0609 01:41:43.794703 329232 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/client.key
I0609 01:41:43.794837 329232 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
I0609 01:41:43.794899 329232 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0609 01:41:43.836439 329232 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.20.7
k8s.gcr.io/kube-controller-manager:v1.20.7
k8s.gcr.io/kube-apiserver:v1.20.7
k8s.gcr.io/kube-scheduler:v1.20.7
gcr.io/k8s-minikube/storage-provisioner:v5
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
-- /stdout --
I0609 01:41:43.836465 329232 docker.go:466] Images already preloaded, skipping extraction
I0609 01:41:43.836518 329232 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0609 01:41:43.874900 329232 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.20.7
k8s.gcr.io/kube-controller-manager:v1.20.7
k8s.gcr.io/kube-apiserver:v1.20.7
k8s.gcr.io/kube-scheduler:v1.20.7
gcr.io/k8s-minikube/storage-provisioner:v5
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
-- /stdout --
I0609 01:41:43.874929 329232 cache_images.go:74] Images are preloaded, skipping loading
I0609 01:41:43.874987 329232 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0609 01:41:43.959341 329232 cni.go:93] Creating CNI manager for ""
I0609 01:41:43.959363 329232 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0609 01:41:43.959373 329232 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0609 01:41:43.959385 329232 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.7 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-20210609012809-9941 NodeName:auto-20210609012809-9941 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0609 01:41:43.959528 329232 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.76.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "auto-20210609012809-9941"
  kubeletExtraArgs:
    node-ip: 192.168.76.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.7
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
I0609 01:41:43.959623 329232 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.7/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=auto-20210609012809-9941 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.7 ClusterName:auto-20210609012809-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0609 01:41:43.959678 329232 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.7
I0609 01:41:43.966644 329232 binaries.go:44] Found k8s binaries, skipping transfer
I0609 01:41:43.966767 329232 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d
I0609 01:41:43.973306 329232 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (350 bytes)
I0609 01:41:43.985377 329232 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0609 01:41:43.996832 329232 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1883 bytes)
I0609 01:41:44.008194 329232 ssh_runner.go:316] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes)
I0609 01:41:44.019580 329232 ssh_runner.go:316] scp memory --> /etc/init.d/kubelet (839 bytes)
I0609 01:41:44.031187 329232 ssh_runner.go:149] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0609 01:41:44.033902 329232 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0609 01:41:44.042089 329232 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941 for IP: 192.168.76.2
I0609 01:41:44.042136 329232 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.key
I0609 01:41:44.042171 329232 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.key
I0609 01:41:44.042229 329232 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/client.key
I0609 01:41:44.042250 329232 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key.31bdca25
I0609 01:41:44.042257 329232 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0609 01:41:44.226573 329232 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt.31bdca25 ...
I0609 01:41:44.226606 329232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt.31bdca25: {Name:mk90ec242a66bfd79902e518464ceb62421bad6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:41:44.226771 329232 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key.31bdca25 ...
I0609 01:41:44.226783 329232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key.31bdca25: {Name:mkfae0a3bd896dd88f44a8261ced590d5cf2eaf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:41:44.226857 329232 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt
I0609 01:41:44.226912 329232 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key
I0609 01:41:44.226968 329232 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.key
I0609 01:41:44.226982 329232 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.crt with IP's: []
I0609 01:41:44.493832 329232 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.crt ...
I0609 01:41:44.493863 329232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.crt: {Name:mkb1a9418c2d79591044d594bd7bb611a67d607c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:41:44.494045 329232 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.key ...
I0609 01:41:44.494060 329232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.key: {Name:mkadb2ec9513a5b1c87d24f9a0d9353126c956ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:41:44.494231 329232 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941.pem (1338 bytes)
W0609 01:41:44.494272 329232 certs.go:365] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941_empty.pem, impossibly tiny 0 bytes
I0609 01:41:44.494299 329232 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem (1675 bytes)
I0609 01:41:44.494326 329232 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem (1082 bytes)
I0609 01:41:44.494386 329232 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem (1123 bytes)
I0609 01:41:44.494417 329232 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem (1679 bytes)
I0609 01:41:44.495301 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0609 01:41:44.513759 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0609 01:41:44.556375 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0609 01:41:44.574638 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0609 01:41:44.590891 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0609 01:41:44.607761 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0609 01:41:44.624984 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0609 01:41:44.641979 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0609 01:41:44.661420 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941.pem --> /usr/share/ca-certificates/9941.pem (1338 bytes)
I0609 01:41:44.679420 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0609 01:41:44.697286 329232 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
I0609 01:41:44.709772 329232 ssh_runner.go:149] Run: openssl version
I0609 01:41:44.714441 329232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9941.pem && ln -fs /usr/share/ca-certificates/9941.pem /etc/ssl/certs/9941.pem"
I0609 01:41:44.721420 329232 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9941.pem
I0609 01:41:44.724999 329232 certs.go:410] hashing: -rw-r--r-- 1 root root 1338 Jun 9 01:04 /usr/share/ca-certificates/9941.pem
I0609 01:41:44.725051 329232 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9941.pem
I0609 01:41:44.730221 329232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9941.pem /etc/ssl/certs/51391683.0"
I0609 01:41:44.738018 329232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0609 01:41:44.744990 329232 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0609 01:41:44.747847 329232 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Jun 9 00:58 /usr/share/ca-certificates/minikubeCA.pem
I0609 01:41:44.747885 329232 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0609 01:41:44.752327 329232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0609 01:41:44.759007 329232 kubeadm.go:390] StartCluster: {Name:auto-20210609012809-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:auto-20210609012809-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0609 01:41:44.759106 329232 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0609 01:41:44.801843 329232 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0609 01:41:44.810329 329232 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0609 01:41:44.818129 329232 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0609 01:41:44.818183 329232 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0609 01:41:44.825259 329232 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0609 01:41:44.825307 329232 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0609 01:41:43.875536 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:46.376745 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:45.588110 329232 out.go:197] - Generating certificates and keys ...
I0609 01:41:48.300953 300573 system_pods.go:86] 8 kube-system pods found
I0609 01:41:48.300985 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:48.300993 300573 system_pods.go:89] "etcd-old-k8s-version-20210609012901-9941" [d1de6264-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:48.301000 300573 system_pods.go:89] "kube-apiserver-old-k8s-version-20210609012901-9941" [c8550af5-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:48.301006 300573 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210609012901-9941" [c8ed9fe0-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:48.301013 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:48.301020 300573 system_pods.go:89] "kube-scheduler-old-k8s-version-20210609012901-9941" [c768fa6f-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:48.301031 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:41:48.301043 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:48.301053 300573 system_pods.go:126] duration metric: took 56.76990207s to wait for k8s-apps to be running ...
I0609 01:41:48.301068 300573 system_svc.go:44] waiting for kubelet service to be running ....
I0609 01:41:48.301114 300573 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0609 01:41:48.310381 300573 system_svc.go:56] duration metric: took 9.307261ms WaitForService to wait for kubelet.
I0609 01:41:48.310405 300573 kubeadm.go:547] duration metric: took 1m14.727322076s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0609 01:41:48.310424 300573 node_conditions.go:102] verifying NodePressure condition ...
I0609 01:41:48.312372 300573 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
I0609 01:41:48.312391 300573 node_conditions.go:123] node cpu capacity is 8
I0609 01:41:48.312404 300573 node_conditions.go:105] duration metric: took 1.974952ms to run NodePressure ...
I0609 01:41:48.312415 300573 start.go:219] waiting for startup goroutines ...
I0609 01:41:48.356569 300573 start.go:463] kubectl: 1.20.5, cluster: 1.14.0 (minor skew: 6)
I0609 01:41:48.358565 300573 out.go:170]
W0609 01:41:48.358730 300573 out.go:235] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.14.0.
I0609 01:41:48.360236 300573 out.go:170] - Want kubectl v1.14.0? Try 'minikube kubectl -- get pods -A'
I0609 01:41:48.361792 300573 out.go:170] * Done! kubectl is now configured to use "old-k8s-version-20210609012901-9941" cluster and "default" namespace by default
I0609 01:41:46.954352 344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:48.955130 344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:47.875252 352096 pod_ready.go:92] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"True"
I0609 01:41:47.875281 352096 pod_ready.go:81] duration metric: took 28.515609073s waiting for pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace to be "Ready" ...
I0609 01:41:47.875297 352096 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-8bhjk" in "kube-system" namespace to be "Ready" ...
I0609 01:41:49.886712 352096 pod_ready.go:92] pod "calico-node-8bhjk" in "kube-system" namespace has status "Ready":"True"
I0609 01:41:49.886740 352096 pod_ready.go:81] duration metric: took 2.011435025s waiting for pod "calico-node-8bhjk" in "kube-system" namespace to be "Ready" ...
I0609 01:41:49.886752 352096 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-kngs5" in "kube-system" namespace to be "Ready" ...
I0609 01:41:47.864552 329232 out.go:197] - Booting up control plane ...
I0609 01:41:50.955197 344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:53.456163 344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:51.896789 352096 pod_ready.go:92] pod "coredns-74ff55c5b-kngs5" in "kube-system" namespace has status "Ready":"True"
I0609 01:41:51.896811 352096 pod_ready.go:81] duration metric: took 2.010052283s waiting for pod "coredns-74ff55c5b-kngs5" in "kube-system" namespace to be "Ready" ...
I0609 01:41:51.896821 352096 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-qc224" in "kube-system" namespace to be "Ready" ...
I0609 01:41:51.898882 352096 pod_ready.go:97] error getting pod "coredns-74ff55c5b-qc224" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-qc224" not found
I0609 01:41:51.898909 352096 pod_ready.go:81] duration metric: took 2.080404ms waiting for pod "coredns-74ff55c5b-qc224" in "kube-system" namespace to be "Ready" ...
E0609 01:41:51.898919 352096 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-74ff55c5b-qc224" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-qc224" not found
I0609 01:41:51.898928 352096 pod_ready.go:78] waiting up to 5m0s for pod "etcd-calico-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
I0609 01:41:53.907845 352096 pod_ready.go:102] pod "etcd-calico-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:55.911876 352096 pod_ready.go:102] pod "etcd-calico-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:55.954929 344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:57.955126 344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:59.956675 344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:58.408965 352096 pod_ready.go:102] pod "etcd-calico-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:42:00.909845 352096 pod_ready.go:102] pod "etcd-calico-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:57.536931 329232 out.go:197] - Configuring RBAC rules ...
I0609 01:41:57.950447 329232 cni.go:93] Creating CNI manager for ""
I0609 01:41:57.950472 329232 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0609 01:41:57.950504 329232 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0609 01:41:57.950565 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:57.950588 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl label nodes minikube.k8s.io/version=v1.21.0-beta.0 minikube.k8s.io/commit=d4601e4184ba947b7d077d7d426c2bca79bbf9fc minikube.k8s.io/name=auto-20210609012809-9941 minikube.k8s.io/updated_at=2021_06_09T01_41_57_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:58.270674 329232 ops.go:34] apiserver oom_adj: -16
I0609 01:41:58.270873 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:58.834789 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:59.334848 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:59.834836 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:42:00.334592 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:42:00.835312 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:42:01.335240 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:42:01.834799 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:42:02.334849 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
*
* ==> Docker <==
* -- Logs begin at Wed 2021-06-09 01:34:39 UTC, end at Wed 2021-06-09 01:42:03 UTC. --
Jun 09 01:40:02 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:02.779605017Z" level=info msg="ignoring event" container=cc0aca83efeca0d2b5a6380f0035838137a5ddede617bb12397795175054b95c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.115851734Z" level=info msg="ignoring event" container=5e67ef29fd782e6882093cefc8d1b2e4e6502289a8aab7eb602baa78ff03d4df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.244359054Z" level=info msg="ignoring event" container=647284240c9b3ff26c1e5d787021349e374f04b87d9f0c78f0972878ca393ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.376184625Z" level=info msg="ignoring event" container=8a1abb294bc93b7aeb07164f4e6a549e477648e117418f2e94e2b62b742a603f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.503253921Z" level=info msg="ignoring event" container=a8f1d2a6258c19eb81fe707363ba95a59689f2623e07e372b5f44056f81b71b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.655460364Z" level=info msg="ignoring event" container=0a42e38b95e96fac8c84fbd6415b07279c3f7b4dc175292ee03bf72f93504bff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.868060101Z" level=info msg="ignoring event" container=8f37f3879958d7bcfb1fb37da48178584862829d0f9ab46e57d49320f37fc3f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:04 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:04.043079624Z" level=info msg="ignoring event" container=83d747333959a40a15d16276795b19088263280ab507d0e39ebf3009f9cd7290 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:04 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:04.194657529Z" level=info msg="ignoring event" container=76c2df28bafa15f4875a399fd3f8bde03a6e76c0e021ffe56eb96ee35045923f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:36 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:36.611806519Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Jun 09 01:40:37 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:37.093237111Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
Jun 09 01:40:37 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:37.256429752Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
Jun 09 01:40:41 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:41.432301024Z" level=warning msg="Error getting v2 registry: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:40:41 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:41.432343163Z" level=info msg="Attempting next endpoint for pull after error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:40:41 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:41.433989922Z" level=error msg="Handler for POST /v1.38/images/create returned error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:40:41 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:41.749379613Z" level=info msg="ignoring event" container=209b2f1f12c840e229b4ae712cd7def2451c3e705cd6cf899ed05d4cae0c0929 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:43 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:43.034860759Z" level=info msg="ignoring event" container=e15298565a01a44ba2e81fbb337da50279e879415a5091222be3a5e36aee08d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:57 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:57.032186534Z" level=warning msg="Error getting v2 registry: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:40:57 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:57.032222718Z" level=info msg="Attempting next endpoint for pull after error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:40:57 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:57.041807409Z" level=error msg="Handler for POST /v1.38/images/create returned error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:41:01 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:41:01.346826619Z" level=info msg="ignoring event" container=417a2459ca5d2c0a4e1befd352a48e44dc91fb4015fe574d929d8c1097ce09cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:41:27 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:41:27.038495294Z" level=warning msg="Error getting v2 registry: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:41:27 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:41:27.038537670Z" level=info msg="Attempting next endpoint for pull after error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:41:27 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:41:27.040714461Z" level=error msg="Handler for POST /v1.38/images/create returned error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:41:34 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:41:34.345802355Z" level=info msg="ignoring event" container=0a878f155b99161e7c0c238df1d2ea55fb150f549896a43282d60c2825d2e0ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
* CONTAINER       IMAGE           CREATED              STATE     NAME                        ATTEMPT   POD ID
0a878f155b991   a90209bb39e3d   29 seconds ago       Exited    dashboard-metrics-scraper   3         7b28bd8313edd
9230420d066a0   9a07b5b4bfac0   About a minute ago   Running   kubernetes-dashboard        0         52cb0877bbe76
80656451acc2e   eb516548c180f   About a minute ago   Running   coredns                     0         b82c08bb91986
d27ec4783cae5   6e38f40d628db   About a minute ago   Running   storage-provisioner         0         3c840dfa16845
ef3565ebed501   5cd54e388abaf   About a minute ago   Running   kube-proxy                  0         facebb8dc382e
15294a1b99e50   00638a24688b0   About a minute ago   Running   kube-scheduler              0         9113a9c371341
76559266dc96c   b95b1efa0436b   About a minute ago   Running   kube-controller-manager     0         5c8b321c5839a
557ff658123d4   2c4adeb21b4ff   About a minute ago   Running   etcd                        0         4d98c28eb4819
7435c96f89723   ecf910f40d6e0   About a minute ago   Running   kube-apiserver              0         553d498b0da82
*
* ==> coredns [80656451acc2] <==
* .:53
2021-06-09T01:40:37.071Z [INFO] CoreDNS-1.3.1
2021-06-09T01:40:37.071Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2021-06-09T01:40:37.071Z [INFO] plugin/reload: Running configuration MD5 = d7336ec3b7f1205cfa0fef85b62c291b
*
* ==> describe nodes <==
* Name: old-k8s-version-20210609012901-9941
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=old-k8s-version-20210609012901-9941
kubernetes.io/os=linux
minikube.k8s.io/commit=d4601e4184ba947b7d077d7d426c2bca79bbf9fc
minikube.k8s.io/name=old-k8s-version-20210609012901-9941
minikube.k8s.io/updated_at=2021_06_09T01_40_17_0700
minikube.k8s.io/version=v1.21.0-beta.0
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 09 Jun 2021 01:40:13 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
----             ------  -----------------                 ------------------                ------                       -------
MemoryPressure   False   Wed, 09 Jun 2021 01:41:13 +0000   Wed, 09 Jun 2021 01:40:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
DiskPressure     False   Wed, 09 Jun 2021 01:41:13 +0000   Wed, 09 Jun 2021 01:40:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
PIDPressure      False   Wed, 09 Jun 2021 01:41:13 +0000   Wed, 09 Jun 2021 01:40:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
Ready            True    Wed, 09 Jun 2021 01:41:13 +0000   Wed, 09 Jun 2021 01:40:08 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
InternalIP: 192.168.67.2
Hostname: old-k8s-version-20210609012901-9941
Capacity:
cpu: 8
ephemeral-storage: 309568300Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32951376Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 309568300Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32951376Ki
pods: 110
System Info:
Machine ID: b77ec962e3734760b1e756ffc5e83152
System UUID: fcb82c90-e30d-41cf-83d7-0b244092491c
Boot ID: e08f76ce-1642-432a-8e61-95aaa19183a7
Kernel Version: 4.9.0-15-amd64
OS Image: Ubuntu 20.04.2 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.7
Kubelet Version: v1.14.0
Kube-Proxy Version: v1.14.0
PodCIDR: 10.244.0.0/24
Non-terminated Pods: (10 in total)
Namespace             Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
---------             ----                                                           ------------  ----------  ---------------  -------------  ---
kube-system           coredns-fb8b8dccf-ctgrx                                        100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     91s
kube-system           etcd-old-k8s-version-20210609012901-9941                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
kube-system           kube-apiserver-old-k8s-version-20210609012901-9941             250m (3%)     0 (0%)      0 (0%)           0 (0%)         46s
kube-system           kube-controller-manager-old-k8s-version-20210609012901-9941   200m (2%)     0 (0%)      0 (0%)           0 (0%)         45s
kube-system           kube-proxy-97rr9                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
kube-system           kube-scheduler-old-k8s-version-20210609012901-9941             100m (1%)     0 (0%)      0 (0%)           0 (0%)         48s
kube-system           metrics-server-8546d8b77b-lqx7b                                100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         87s
kube-system           storage-provisioner                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
kubernetes-dashboard  dashboard-metrics-scraper-5b494cc544-529qb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
kubernetes-dashboard  kubernetes-dashboard-5d8978d65d-5c7t7                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests    Limits
--------           --------    ------
cpu                750m (9%)   0 (0%)
memory             370Mi (1%)  170Mi (0%)
ephemeral-storage  0 (0%)      0 (0%)
Events:
Type    Reason                   Age                  From                                             Message
----    ------                   ----                 ----                                             -------
Normal  Starting                 116s                 kubelet, old-k8s-version-20210609012901-9941     Starting kubelet.
Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet, old-k8s-version-20210609012901-9941     Node old-k8s-version-20210609012901-9941 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet, old-k8s-version-20210609012901-9941     Node old-k8s-version-20210609012901-9941 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     116s (x7 over 116s)  kubelet, old-k8s-version-20210609012901-9941     Node old-k8s-version-20210609012901-9941 status is now: NodeHasSufficientPID
Normal  NodeAllocatableEnforced  116s                 kubelet, old-k8s-version-20210609012901-9941     Updated Node Allocatable limit across pods
Normal  Starting                 88s                  kube-proxy, old-k8s-version-20210609012901-9941  Starting kube-proxy.
*
* ==> dmesg <==
* [ +1.658653] IPv4: martian source 10.85.0.4 from 10.85.0.4, on dev eth0
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff ce 5c c6 1f 63 8a 08 06 .......\..c...
[ +0.004022] IPv4: martian source 10.85.0.5 from 10.85.0.5, on dev eth0
[ +0.000001] ll header: 00000000: ff ff ff ff ff ff 0e 5d 4b c1 e0 ed 08 06 .......]K.....
[ +2.140856] IPv4: martian source 10.85.0.6 from 10.85.0.6, on dev eth0
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff 3e a3 2b db cb b6 08 06 ......>.+.....
[ +0.147751] IPv4: martian source 10.85.0.7 from 10.85.0.7, on dev eth0
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff 9a f2 40 59 da 87 08 06 ........@Y....
[ +2.083360] IPv4: martian source 10.85.0.8 from 10.85.0.8, on dev eth0
[ +0.000001] ll header: 00000000: ff ff ff ff ff ff 56 9d 71 18 33 dd 08 06 ......V.q.3...
[ +0.000616] IPv4: martian source 10.85.0.9 from 10.85.0.9, on dev eth0
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff 8e 8d b3 62 b0 07 08 06 .........b....
[ +1.714381] IPv4: martian source 10.85.0.10 from 10.85.0.10, on dev eth0
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff 9e d1 b5 da bf 05 08 06 ..............
[ +0.003822] IPv4: martian source 10.85.0.11 from 10.85.0.11, on dev eth0
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff 92 3a 5c 13 9f 7c 08 06 .......:\..|..
[ +0.920701] IPv4: martian source 10.85.0.12 from 10.85.0.12, on dev eth0
[ +0.000003] ll header: 00000000: ff ff ff ff ff ff d2 50 1c d3 1f 17 08 06 .......P......
[ +0.002962] IPv4: martian source 10.85.0.13 from 10.85.0.13, on dev eth0
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff 86 09 69 5a 94 d2 08 06 ........iZ....
[ +0.999987] IPv4: martian source 10.85.0.14 from 10.85.0.14, on dev eth0
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff fa 88 03 51 34 f3 08 06 .........Q4...
[ +0.004235] IPv4: martian source 10.85.0.15 from 10.85.0.15, on dev eth0
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff f6 25 39 34 91 f2 08 06 .......%94....
[ +6.380947] cgroup: cgroup2: unknown option "nsdelegate"
*
* ==> etcd [557ff658123d] <==
* 2021-06-09 01:40:48.647414 W | wal: sync duration of 1.103904697s, expected less than 1s
2021-06-09 01:40:48.753091 W | etcdserver: request "header:<ID:2289933000483394557 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-fb8b8dccf\" mod_revision:364 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-fb8b8dccf\" value_size:1214 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-fb8b8dccf\" > >>" with result "size:16" took too long (105.414042ms) to execute
2021-06-09 01:40:48.753496 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (250.229741ms) to execute
2021-06-09 01:40:48.753722 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-fb8b8dccf-ctgrx\" " with result "range_response_count:1 size:1770" took too long (891.632545ms) to execute
2021-06-09 01:40:50.467937 W | etcdserver: request "header:<ID:2289933000483394562 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:537 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:677 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:16" took too long (1.08693209s) to execute
2021-06-09 01:40:50.468037 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.566131533s) to execute
2021-06-09 01:40:50.468071 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20210609012901-9941\" " with result "range_response_count:1 size:3347" took too long (1.710868913s) to execute
2021-06-09 01:40:50.468206 W | etcdserver: read-only range request "key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544-529qb.1686c662e29f9611\" " with result "range_response_count:1 size:597" took too long (928.182072ms) to execute
2021-06-09 01:40:51.483862 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-proxy-97rr9\" " with result "range_response_count:1 size:2147" took too long (1.013095215s) to execute
2021-06-09 01:41:12.976673 W | wal: sync duration of 1.117225227s, expected less than 1s
2021-06-09 01:41:13.114230 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:3347" took too long (314.968585ms) to execute
2021-06-09 01:41:13.114284 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-8546d8b77b-lqx7b.1686c6626a0d515c\" " with result "range_response_count:1 size:550" took too long (1.100437486s) to execute
2021-06-09 01:41:13.114371 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:4 size:7785" took too long (687.507808ms) to execute
2021-06-09 01:41:13.114518 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-8546d8b77b-lqx7b\" " with result "range_response_count:1 size:1851" took too long (1.101558003s) to execute
2021-06-09 01:41:13.114553 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (579.387664ms) to execute
2021-06-09 01:41:13.722674 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-8546d8b77b-lqx7b.1686c6626a0d9249\" " with result "range_response_count:1 size:511" took too long (603.050028ms) to execute
2021-06-09 01:41:13.722784 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/old-k8s-version-20210609012901-9941\" " with result "range_response_count:1 size:395" took too long (601.855298ms) to execute
2021-06-09 01:41:13.723059 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-node-lease\" " with result "range_response_count:1 size:187" took too long (573.108462ms) to execute
2021-06-09 01:41:15.464247 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (1.450534843s) to execute
2021-06-09 01:41:15.464304 W | etcdserver: read-only range request "key:\"/registry/rolebindings\" range_end:\"/registry/rolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (166.55648ms) to execute
2021-06-09 01:41:15.464595 W | etcdserver: read-only range request "key:\"/registry/ingress\" range_end:\"/registry/ingrest\" count_only:true " with result "range_response_count:0 size:5" took too long (144.856126ms) to execute
2021-06-09 01:41:15.465036 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings\" range_end:\"/registry/clusterrolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (1.527858302s) to execute
2021-06-09 01:41:15.465734 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (313.803884ms) to execute
2021-06-09 01:41:37.088502 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (579.483729ms) to execute
2021-06-09 01:41:57.525183 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (146.394885ms) to execute
*
* ==> kernel <==
* 01:42:03 up 1:24, 0 users, load average: 4.91, 3.39, 2.63
Linux old-k8s-version-20210609012901-9941 4.9.0-15-amd64 #1 SMP Debian 4.9.258-1 (2021-03-08) x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
*
* ==> kube-apiserver [7435c96f8972] <==
* I0609 01:41:51.475583 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:41:52.475740 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:41:52.475870 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:41:53.476020 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:41:53.476131 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:41:54.476295 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:41:54.476431 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:41:55.476606 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:41:55.476735 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:41:56.476937 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:41:56.477102 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:41:57.477291 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:41:57.477429 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:41:58.477563 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:41:58.477715 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:41:59.477874 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:41:59.478011 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:42:00.478169 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:42:00.478301 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:42:01.478453 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:42:01.478583 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:42:02.478748 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:42:02.478888 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:42:03.479048 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:42:03.479199 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
*
* ==> kube-controller-manager [76559266dc96] <==
* I0609 01:40:35.350957 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"af75069a-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"427", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0609 01:40:35.355715 1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0609 01:40:35.359115 1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"af7ffe92-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-5d8978d65d to 1
E0609 01:40:35.361941 1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0609 01:40:35.362185 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"af75069a-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0609 01:40:35.363976 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"af80a0bd-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"435", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0609 01:40:35.365457 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"af75069a-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0609 01:40:35.365465 1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0609 01:40:35.367928 1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0609 01:40:35.372059 1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0609 01:40:35.372481 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"af80a0bd-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"440", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0609 01:40:35.441817 1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0609 01:40:35.441964 1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0609 01:40:35.442412 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"af80a0bd-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"440", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0609 01:40:35.442440 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"af75069a-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0609 01:40:35.464444 1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0609 01:40:35.464486 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"af80a0bd-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"440", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0609 01:40:35.546527 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"af75069a-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-5b494cc544-529qb
I0609 01:40:35.546799 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"af80a0bd-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5d8978d65d-5c7t7
I0609 01:40:36.049812 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"af420efe-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"402", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-8546d8b77b-lqx7b
E0609 01:41:02.997582 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0609 01:41:05.550860 1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0609 01:41:33.249304 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0609 01:41:37.552663 1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0609 01:42:03.500854 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
*
* ==> kube-proxy [ef3565ebed50] <==
* W0609 01:40:33.954499 1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
I0609 01:40:33.964131 1 server_others.go:148] Using iptables Proxier.
I0609 01:40:33.964802 1 server_others.go:178] Tearing down inactive rules.
E0609 01:40:34.154995 1 proxier.go:583] Error removing iptables rules in ipvs proxier: error deleting chain "KUBE-MARK-MASQ": exit status 1: iptables: Too many links.
I0609 01:40:35.290112 1 server.go:555] Version: v1.14.0
I0609 01:40:35.341044 1 config.go:202] Starting service config controller
I0609 01:40:35.341164 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0609 01:40:35.341748 1 config.go:102] Starting endpoints config controller
I0609 01:40:35.343249 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0609 01:40:35.441725 1 controller_utils.go:1034] Caches are synced for service config controller
I0609 01:40:35.443748 1 controller_utils.go:1034] Caches are synced for endpoints config controller
*
* ==> kube-scheduler [15294a1b99e5] <==
* W0609 01:40:10.688361 1 authentication.go:55] Authentication is disabled
I0609 01:40:10.688374 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
I0609 01:40:10.688743 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0609 01:40:12.981814 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0609 01:40:12.981916 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0609 01:40:12.982827 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0609 01:40:13.050964 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0609 01:40:13.062003 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0609 01:40:13.062138 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0609 01:40:13.062510 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0609 01:40:13.062930 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0609 01:40:13.064487 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0609 01:40:13.065331 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0609 01:40:13.982943 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0609 01:40:13.984017 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0609 01:40:13.985045 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0609 01:40:14.052710 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0609 01:40:14.063171 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0609 01:40:14.063859 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0609 01:40:14.065063 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0609 01:40:14.066262 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0609 01:40:14.067278 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0609 01:40:14.068396 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
I0609 01:40:15.890053 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0609 01:40:15.990228 1 controller_utils.go:1034] Caches are synced for scheduler controller
*
* ==> kubelet <==
* -- Logs begin at Wed 2021-06-09 01:34:39 UTC, end at Wed 2021-06-09 01:42:03 UTC. --
Jun 09 01:40:41 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:41.434392 6233 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
Jun 09 01:40:41 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:41.434450 6233 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
Jun 09 01:40:41 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:41.434528 6233 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
Jun 09 01:40:41 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:41.434593 6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:40:41 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:41.702071 6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
Jun 09 01:40:43 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:43.724887 6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
Jun 09 01:40:44 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:44.734847 6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
Jun 09 01:40:49 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:49.538510 6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
Jun 09 01:40:57 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:57.042394 6233 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
Jun 09 01:40:57 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:57.042449 6233 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
Jun 09 01:40:57 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:57.042530 6233 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
Jun 09 01:40:57 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:57.042566 6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:41:01 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:01.836699 6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
Jun 09 01:41:09 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:09.538606 6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
Jun 09 01:41:12 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:12.012609 6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
Jun 09 01:41:21 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:21.011631 6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
Jun 09 01:41:27 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:27.040969 6233 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
Jun 09 01:41:27 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:27.041003 6233 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
Jun 09 01:41:27 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:27.041051 6233 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
Jun 09 01:41:27 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:27.041074 6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:41:35 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:35.034469 6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
Jun 09 01:41:39 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:39.538621 6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
Jun 09 01:41:40 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:40.012660 6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
Jun 09 01:41:52 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:52.011734 6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
Jun 09 01:41:53 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:53.012733 6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
*
* ==> kubernetes-dashboard [9230420d066a] <==
* 2021/06/09 01:40:37 Starting overwatch
2021/06/09 01:40:37 Using namespace: kubernetes-dashboard
2021/06/09 01:40:37 Using in-cluster config to connect to apiserver
2021/06/09 01:40:37 Using secret token for csrf signing
2021/06/09 01:40:37 Initializing csrf token from kubernetes-dashboard-csrf secret
2021/06/09 01:40:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2021/06/09 01:40:37 Successful initial request to the apiserver, version: v1.14.0
2021/06/09 01:40:37 Generating JWE encryption key
2021/06/09 01:40:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2021/06/09 01:40:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2021/06/09 01:40:37 Initializing JWE encryption key from synchronized object
2021/06/09 01:40:37 Creating in-cluster Sidecar client
2021/06/09 01:40:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2021/06/09 01:40:37 Serving insecurely on HTTP port: 9090
2021/06/09 01:41:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2021/06/09 01:41:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
*
* ==> storage-provisioner [d27ec4783cae] <==
* I0609 01:40:36.443365 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0609 01:40:36.452888 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0609 01:40:36.452950 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0609 01:40:36.459951 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0609 01:40:36.460148 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210609012901-9941_fc86cf60-f6d1-4320-9b0a-458b591f1e4d!
I0609 01:40:36.461060 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"af273732-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20210609012901-9941_fc86cf60-f6d1-4320-9b0a-458b591f1e4d became leader
I0609 01:40:36.560264 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210609012901-9941_fc86cf60-f6d1-4320-9b0a-458b591f1e4d!
-- /stdout --
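Editor's note on the captured logs above: the repeated "unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1" and "failed to discover some groups" entries are expected in this test run, since metrics-server appears to be deliberately pinned to the unresolvable registry fake.domain, so its APIService never gets a ready backend. The kubelet entries likewise show the container restart back-off doubling (10s, 20s, 40s) for the crash-looping dashboard-metrics-scraper. Below is a minimal Go sketch of that doubling-with-cap back-off pattern; the 10s base and 5m cap match kubelet's documented defaults, but this is an illustration, not kubelet's actual code (which uses the flowcontrol.Backoff helper from client-go):

    package main

    import (
        "fmt"
        "time"
    )

    // backoffSchedule reproduces the shape of the "Back-off 10s/20s/40s ..."
    // kubelet messages above: each retry doubles the delay until a cap is hit.
    func backoffSchedule(base, limit time.Duration, retries int) []time.Duration {
        out := make([]time.Duration, 0, retries)
        d := base
        for i := 0; i < retries; i++ {
            out = append(out, d)
            d *= 2
            if d > limit {
                d = limit
            }
        }
        return out
    }

    func main() {
        fmt.Println(backoffSchedule(10*time.Second, 5*time.Minute, 7))
        // Prints: [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
    }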
helpers_test.go:250: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210609012901-9941 -n old-k8s-version-20210609012901-9941
E0609 01:42:04.497051 9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/default-k8s-different-port-20210609012935-9941/client.crt: no such file or directory
helpers_test.go:257: (dbg) Run: kubectl --context old-k8s-version-20210609012901-9941 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:263: non-running pods: metrics-server-8546d8b77b-lqx7b
helpers_test.go:265: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:268: (dbg) Run: kubectl --context old-k8s-version-20210609012901-9941 describe pod metrics-server-8546d8b77b-lqx7b
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context old-k8s-version-20210609012901-9941 describe pod metrics-server-8546d8b77b-lqx7b: exit status 1 (82.688697ms)
** stderr **
Error from server (NotFound): pods "metrics-server-8546d8b77b-lqx7b" not found
** /stderr **
helpers_test.go:270: kubectl --context old-k8s-version-20210609012901-9941 describe pod metrics-server-8546d8b77b-lqx7b: exit status 1
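Editor's note: the NotFound above is most likely not the pod disappearing. The field-selector list at helpers_test.go:257 found metrics-server-8546d8b77b-lqx7b in the kube-system namespace, but the describe at helpers_test.go:268 passes no -n flag, so kubectl looks in the default namespace and reports NotFound even though the pod likely still exists. A hedged client-go sketch of the namespace-scoped lookup (pod name and namespace taken from the log; the kubeconfig path is an assumption):

    package main

    import (
        "context"
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Get is namespace-scoped: asking in "default" yields NotFound while the
        // same name in "kube-system" succeeds, matching the kubectl behavior above.
        _, err = cs.CoreV1().Pods("kube-system").Get(context.Background(),
            "metrics-server-8546d8b77b-lqx7b", metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            fmt.Println("not found in this namespace")
            return
        }
        if err != nil {
            panic(err)
        }
        fmt.Println("pod found")
    }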
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:227: (dbg) Run: docker inspect old-k8s-version-20210609012901-9941
helpers_test.go:231: (dbg) docker inspect old-k8s-version-20210609012901-9941:
-- stdout --
[
{
"Id": "91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f",
"Created": "2021-06-09T01:32:22.976408213Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 300855,
"ExitCode": 0,
"Error": "",
"StartedAt": "2021-06-09T01:34:39.439780041Z",
"FinishedAt": "2021-06-09T01:34:37.912284168Z"
},
"Image": "sha256:9fce26cb202ecbcb479d0e9dcc943ed048e5957c0bb68667d9476ebc413ee6d7",
"ResolvConfPath": "/var/lib/docker/containers/91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f/hostname",
"HostsPath": "/var/lib/docker/containers/91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f/hosts",
"LogPath": "/var/lib/docker/containers/91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f/91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f-json.log",
"Name": "/old-k8s-version-20210609012901-9941",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-20210609012901-9941:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-20210609012901-9941",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/e06a36ecd342e83282e80589cd0b96a25668d1c022258253a30c4abc82951614-init/diff:/var/lib/docker/overlay2/bc56a5d6f9b885d4e990c356e0ccfc01ecbed88f252ebfaa9441de3180832d7f/diff:/var/lib/docker/overlay2/25b993e35a4369dc1c3bb5a1579e6e35329eea51bcbd403abb32859a67061a54/diff:/var/lib/docker/overlay2/1fe8141f79894ceaa71723e3cebb26aaf6eb09b92957f7ef1ad563a53df17477/diff:/var/lib/docker/overlay2/c43074dca065bc9311721e20aecd4b6af65294c44e7d9ff6f84a18717d22f9da/diff:/var/lib/docker/overlay2/1318b2c7f3cf224a7ccebeb69bbc1127489945bbb88c21f3171770868a161187/diff:/var/lib/docker/overlay2/c38fd14f646377d81cc91524a921d99d0518ca09e12d17c45948037013fd9100/diff:/var/lib/docker/overlay2/3860f2d47e6d7da92eb5946fda824e25f4c789d00d7e8daa71d0200aac14b536/diff:/var/lib/docker/overlay2/f55aac0c255ec87a42f4d6bc6e79a51ccac3a1d472b1ef4565f141af1acedb04/diff:/var/lib/docker/overlay2/7a1f3b94ec1a7fec96e3f1c789cb025636706f45db2f63cafd48827296910d1d/diff:/var/lib/docker/overlay2/653b9d
24f60635898ac8c6e1b372c54937a708e1e483d47012bc30c58bba0c8c/diff:/var/lib/docker/overlay2/c1832b167afb6406029f607ff5bfad73774ce698299c2b90633d157123654c52/diff:/var/lib/docker/overlay2/75fc291915e6994891ddc9a151bd4c24056ab74e6c8428ba1aef2b2949bbc56e/diff:/var/lib/docker/overlay2/8187764e5fdd094760f8daef22c41c28995fd009c1c56d956db1bb78266b84b2/diff:/var/lib/docker/overlay2/8257db85fb8192780c9e79b131704c61b85e47f9e5c7152097b1a341d06f5840/diff:/var/lib/docker/overlay2/e7499e6556225f397b775719266146f16285f25036f4cf348b09e2fd3be18982/diff:/var/lib/docker/overlay2/84dea696e080b4925128f5b32c22c548c34a63a9dfafa5cb45a932dded279620/diff:/var/lib/docker/overlay2/0646a50eb26264b2a4349823800615095034ab376268714c37e1193106307a2a/diff:/var/lib/docker/overlay2/873d4336e86132442a84ef0da60e4f8fdf8e4989093c0f2a4279120e10ad4f2c/diff:/var/lib/docker/overlay2/44007c68fc2016e815ed96a5faadd25bfb35c362bf1b0521c430ef2ea3805f42/diff:/var/lib/docker/overlay2/7f832f8cf06c783bc6789b50392d803201e52f6baa4a788b5ce48169c94316eb/diff:/var/lib/d
ocker/overlay2/aa919f3d56d7f8b40e56ee381db724e83ee09c96eb696e67326ae47e81324228/diff:/var/lib/docker/overlay2/c53704cae60bb8bd8b355c2d6fb142c9e105dbfeeece4ba9ee0eb81aaaa83fe9/diff:/var/lib/docker/overlay2/1d80475a809da44174d557238fbb00860567d808a157fc2291ac5fedb6f8b2d2/diff:/var/lib/docker/overlay2/d7e1256a346a88b7ce7e6fe9d6ab1146a2c7705c99fcb974ad10b671573b6b83/diff:/var/lib/docker/overlay2/67dc882ee4f992f5a9dc58b56bf7d7a6e78ffe50ccd6227d33d9e2047b7ff877/diff:/var/lib/docker/overlay2/156a8e643f241fdf84afe135ad766dbedd0c515a725939d012de628eb9dd2013/diff:/var/lib/docker/overlay2/ee244a7deb19ed9dc719af435d92c54624874690ce0999c7d030e2f57ecb9e6a/diff:/var/lib/docker/overlay2/91f8a889599c1faaa7f40cc449793deff620d17e83e88dac22c223f131237b12/diff:/var/lib/docker/overlay2/fa8fc61ecf97cd7f2b96efc9d54ba3d9a5b32dcdbb844f360ee173af8fae43a7/diff:/var/lib/docker/overlay2/908106b57878c9eeda6e0d202eee052dee30050250f2a3e5c7d61739d6548623/diff:/var/lib/docker/overlay2/98083c942683a1ac5defcb4b953ba78bbab830ad8c88c4dd145379ebe55
e20a9/diff:/var/lib/docker/overlay2/980703603c9fd3a987c703f9800e56f69031cc7d19f3c692d95eb0937cbb5fd7/diff:/var/lib/docker/overlay2/bc7be9aeb566f06fe346d144629a571aec3e378e82aedf4d6c3fb065569091b2/diff:/var/lib/docker/overlay2/e61aabb9eb2161801d4795e4a00f41afd54c504a52aeeef70d49d2a4f47fcd99/diff:/var/lib/docker/overlay2/a69e80d9160e6158cf9f37881d60928bf3221341b1fffe8d2855488233278102/diff:/var/lib/docker/overlay2/f76fd1ba3588d22f5228ab597df7a62e20a79217c1712dbc33e20061e12891c6/diff",
"MergedDir": "/var/lib/docker/overlay2/e06a36ecd342e83282e80589cd0b96a25668d1c022258253a30c4abc82951614/merged",
"UpperDir": "/var/lib/docker/overlay2/e06a36ecd342e83282e80589cd0b96a25668d1c022258253a30c4abc82951614/diff",
"WorkDir": "/var/lib/docker/overlay2/e06a36ecd342e83282e80589cd0b96a25668d1c022258253a30c4abc82951614/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-20210609012901-9941",
"Source": "/var/lib/docker/volumes/old-k8s-version-20210609012901-9941/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-20210609012901-9941",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-20210609012901-9941",
"name.minikube.sigs.k8s.io": "old-k8s-version-20210609012901-9941",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "1aaecc7a078c61af85d4e6c7c12ffcbc3226c3c0b6bdcdb83ef76e454d99e1ed",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32960"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32959"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32956"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32958"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32957"
}
]
},
"SandboxKey": "/var/run/docker/netns/1aaecc7a078c",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-20210609012901-9941": {
"IPAMConfig": {
"IPv4Address": "192.168.67.2"
},
"Links": null,
"Aliases": [
"91dce77935ba"
],
"NetworkID": "3b40e12707af96d7a87ef0baaec85159df278a3dc4bf817ecae3932e0bcfbdd2",
"EndpointID": "c1650ce3840b80594246acc2f9fcfa432a39e6b48bada03c110930f25ecac707",
"Gateway": "192.168.67.1",
"IPAddress": "192.168.67.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:43:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
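Editor's note on the inspect output above: despite the pause command's exit status 80, the container itself reports "Running": true and "Paused": false, which suggests the failure occurred while pausing the Kubernetes components inside the node rather than at the Docker layer. A hedged Docker SDK sketch that reads the same State fields programmatically (container name from the log; assumes an env-configured Docker client):

    package main

    import (
        "context"
        "fmt"

        "github.com/docker/docker/client"
    )

    func main() {
        cli, err := client.NewClientWithOpts(client.FromEnv)
        if err != nil {
            panic(err)
        }
        defer cli.Close()
        // Same fields the post-mortem reads via `docker inspect`.
        info, err := cli.ContainerInspect(context.Background(), "old-k8s-version-20210609012901-9941")
        if err != nil {
            panic(err)
        }
        fmt.Printf("running=%v paused=%v\n", info.State.Running, info.State.Paused)
    }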
helpers_test.go:235: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210609012901-9941 -n old-k8s-version-20210609012901-9941
helpers_test.go:240: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:243: (dbg) Run: out/minikube-linux-amd64 -p old-k8s-version-20210609012901-9941 logs -n 25
helpers_test.go:243: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20210609012901-9941 logs -n 25: (1.105633065s)
helpers_test.go:248: TestStartStop/group/old-k8s-version/serial/Pause logs:
-- stdout --
*
* ==> Audit <==
* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|-------------------------------|-------------------------------|
| addons | enable dashboard -p | newest-cni-20210609013655-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:37:48 UTC | Wed, 09 Jun 2021 01:37:48 UTC |
| | newest-cni-20210609013655-9941 | | | | | |
| | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 | | | | | |
| start | -p | embed-certs-20210609012903-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:31:33 UTC | Wed, 09 Jun 2021 01:37:54 UTC |
| | embed-certs-20210609012903-9941 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --embed-certs | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.20.7 | | | | | |
| ssh | -p | embed-certs-20210609012903-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:05 UTC | Wed, 09 Jun 2021 01:38:05 UTC |
| | embed-certs-20210609012903-9941 | | | | | |
| | sudo crictl images -o json | | | | | |
| pause | -p | embed-certs-20210609012903-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:05 UTC | Wed, 09 Jun 2021 01:38:06 UTC |
| | embed-certs-20210609012903-9941 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| start | -p newest-cni-20210609013655-9941 --memory=2200 | newest-cni-20210609013655-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:37:48 UTC | Wed, 09 Jun 2021 01:38:07 UTC |
| | --alsologtostderr --wait=apiserver,system_pods,default_sa | | | | | |
| | --feature-gates ServerSideApply=true --network-plugin=cni | | | | | |
| | --extra-config=kubelet.network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 | | | | | |
| | --driver=docker --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.22.0-alpha.2 | | | | | |
| unpause | -p | embed-certs-20210609012903-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:07 UTC | Wed, 09 Jun 2021 01:38:07 UTC |
| | embed-certs-20210609012903-9941 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | -p | newest-cni-20210609013655-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:07 UTC | Wed, 09 Jun 2021 01:38:08 UTC |
| | newest-cni-20210609013655-9941 | | | | | |
| | sudo crictl images -o json | | | | | |
| pause | -p | newest-cni-20210609013655-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:08 UTC | Wed, 09 Jun 2021 01:38:08 UTC |
| | newest-cni-20210609013655-9941 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p | newest-cni-20210609013655-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:09 UTC | Wed, 09 Jun 2021 01:38:10 UTC |
| | newest-cni-20210609013655-9941 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p | embed-certs-20210609012903-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:08 UTC | Wed, 09 Jun 2021 01:38:11 UTC |
| | embed-certs-20210609012903-9941 | | | | | |
| delete | -p | embed-certs-20210609012903-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:11 UTC | Wed, 09 Jun 2021 01:38:12 UTC |
| | embed-certs-20210609012903-9941 | | | | | |
| delete | -p | newest-cni-20210609013655-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:11 UTC | Wed, 09 Jun 2021 01:38:14 UTC |
| | newest-cni-20210609013655-9941 | | | | | |
| delete | -p | newest-cni-20210609013655-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:14 UTC | Wed, 09 Jun 2021 01:38:14 UTC |
| | newest-cni-20210609013655-9941 | | | | | |
| start | -p false-20210609012810-9941 | false-20210609012810-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:14 UTC | Wed, 09 Jun 2021 01:39:52 UTC |
| | --memory=2048 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --cni=false --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | -p false-20210609012810-9941 | false-20210609012810-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:39:52 UTC | Wed, 09 Jun 2021 01:39:52 UTC |
| | pgrep -a kubelet | | | | | |
| delete | -p false-20210609012810-9941 | false-20210609012810-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:07 UTC | Wed, 09 Jun 2021 01:40:10 UTC |
| start | -p | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:32:07 UTC | Wed, 09 Jun 2021 01:40:19 UTC |
| | default-k8s-different-port-20210609012935-9941 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --apiserver-port=8444 | | | | | |
| | --driver=docker --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.20.7 | | | | | |
| ssh | -p | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:29 UTC | Wed, 09 Jun 2021 01:40:30 UTC |
| | default-k8s-different-port-20210609012935-9941 | | | | | |
| | sudo crictl images -o json | | | | | |
| pause | -p | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:30 UTC | Wed, 09 Jun 2021 01:40:30 UTC |
| | default-k8s-different-port-20210609012935-9941 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:31 UTC | Wed, 09 Jun 2021 01:40:32 UTC |
| | default-k8s-different-port-20210609012935-9941 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:32 UTC | Wed, 09 Jun 2021 01:40:36 UTC |
| | default-k8s-different-port-20210609012935-9941 | | | | | |
| delete | -p | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:36 UTC | Wed, 09 Jun 2021 01:40:36 UTC |
| | default-k8s-different-port-20210609012935-9941 | | | | | |
| start | -p | old-k8s-version-20210609012901-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:34:38 UTC | Wed, 09 Jun 2021 01:41:48 UTC |
| | old-k8s-version-20210609012901-9941 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.14.0 | | | | | |
| ssh | -p | old-k8s-version-20210609012901-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:41:58 UTC | Wed, 09 Jun 2021 01:41:59 UTC |
| | old-k8s-version-20210609012901-9941 | | | | | |
| | sudo crictl images -o json | | | | | |
| -p | old-k8s-version-20210609012901-9941 | old-k8s-version-20210609012901-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:42:02 UTC | Wed, 09 Jun 2021 01:42:04 UTC |
| | logs -n 25 | | | | | |
|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
* Log file created at: 2021/06/09 01:40:36
Running on machine: debian-jenkins-agent-1
Binary: Built with gc go1.16.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0609 01:40:36.631110 352096 out.go:291] Setting OutFile to fd 1 ...
I0609 01:40:36.631229 352096 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0609 01:40:36.631240 352096 out.go:304] Setting ErrFile to fd 2...
I0609 01:40:36.631245 352096 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0609 01:40:36.631477 352096 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/bin
I0609 01:40:36.632033 352096 out.go:298] Setting JSON to false
I0609 01:40:36.673982 352096 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":5000,"bootTime":1623197837,"procs":265,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
I0609 01:40:36.674111 352096 start.go:121] virtualization: kvm guest
I0609 01:40:36.676163 352096 out.go:170] * [calico-20210609012810-9941] minikube v1.21.0-beta.0 on Debian 9.13 (kvm/amd64)
I0609 01:40:36.678185 352096 out.go:170] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig
I0609 01:40:36.679873 352096 out.go:170] - MINIKUBE_BIN=out/minikube-linux-amd64
I0609 01:40:36.681411 352096 out.go:170] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube
I0609 01:40:36.683678 352096 out.go:170] - MINIKUBE_LOCATION=11610
I0609 01:40:36.685630 352096 driver.go:335] Setting default libvirt URI to qemu:///system
I0609 01:40:36.743399 352096 docker.go:132] docker version: linux-19.03.15
I0609 01:40:36.743512 352096 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0609 01:40:36.834766 352096 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:133 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2021-06-09 01:40:36.791625716 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-1 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0609 01:40:36.834840 352096 docker.go:244] overlay module found
I0609 01:40:36.837087 352096 out.go:170] * Using the docker driver based on user configuration
I0609 01:40:36.837110 352096 start.go:279] selected driver: docker
I0609 01:40:36.837115 352096 start.go:752] validating driver "docker" against <nil>
I0609 01:40:36.837133 352096 start.go:763] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
W0609 01:40:36.837178 352096 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0609 01:40:36.837196 352096 out.go:235] ! Your cgroup does not allow setting memory.
I0609 01:40:36.838992 352096 out.go:170] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0609 01:40:36.839863 352096 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0609 01:40:36.932062 352096 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:133 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2021-06-09 01:40:36.890557056 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-1 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0609 01:40:36.932180 352096 start_flags.go:259] no existing cluster config was found, will generate one from the flags
I0609 01:40:36.932334 352096 start_flags.go:656] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0609 01:40:36.932354 352096 cni.go:93] Creating CNI manager for "calico"
I0609 01:40:36.932360 352096 start_flags.go:268] Found "Calico" CNI - setting NetworkPlugin=cni
I0609 01:40:36.932385 352096 start_flags.go:273] config:
{Name:calico-20210609012810-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:calico-20210609012810-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0609 01:40:36.934649 352096 out.go:170] * Starting control plane node calico-20210609012810-9941 in cluster calico-20210609012810-9941
I0609 01:40:36.934693 352096 cache.go:115] Beginning downloading kic base image for docker with docker
I0609 01:40:36.936147 352096 out.go:170] * Pulling base image ...
I0609 01:40:36.936172 352096 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
I0609 01:40:36.936194 352096 preload.go:125] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4
I0609 01:40:36.936205 352096 cache.go:54] Caching tarball of preloaded images
I0609 01:40:36.936277 352096 cache.go:134] Downloading gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 to local cache
I0609 01:40:36.936357 352096 preload.go:166] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0609 01:40:36.936376 352096 cache.go:57] Finished verifying existence of preloaded tar for v1.20.7 on docker
I0609 01:40:36.936388 352096 image.go:58] Checking for gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local cache directory
I0609 01:40:36.936410 352096 image.go:61] Found gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local cache directory, skipping pull
I0609 01:40:36.936420 352096 image.go:102] gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 exists in cache, skipping pull
I0609 01:40:36.936434 352096 cache.go:137] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 as a tarball
I0609 01:40:36.936440 352096 image.go:74] Checking for gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local docker daemon
I0609 01:40:36.936479 352096 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/config.json ...
I0609 01:40:36.936497 352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/config.json: {Name:mk031fde7609ae3e97daec785ed839e7488473bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:40:37.048612 352096 image.go:78] Found gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local docker daemon, skipping pull
I0609 01:40:37.048657 352096 cache.go:146] gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 exists in daemon, skipping load
I0609 01:40:37.048675 352096 cache.go:202] Successfully downloaded all kic artifacts
I0609 01:40:37.048728 352096 start.go:313] acquiring machines lock for calico-20210609012810-9941: {Name:mkae53a330b20aaf52e1813b8aee573fcaaec970 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0609 01:40:37.048858 352096 start.go:317] acquired machines lock for "calico-20210609012810-9941" in 106.275µs
I0609 01:40:37.048894 352096 start.go:89] Provisioning new machine with config: &{Name:calico-20210609012810-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:calico-20210609012810-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}
I0609 01:40:37.049004 352096 start.go:126] createHost starting for "" (driver="docker")
I0609 01:40:34.017726 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:40:37.085772 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:40:35.678351 300573 out.go:170] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
I0609 01:40:35.678380 300573 addons.go:344] enableAddons completed in 2.095265934s
I0609 01:40:35.865805 300573 pod_ready.go:102] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"False"
I0609 01:40:38.366329 300573 pod_ready.go:102] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"False"
I0609 01:40:35.493169 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:35.992256 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:36.492949 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:36.992808 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:37.492406 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:37.992460 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:38.492814 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:38.993013 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:39.492346 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:39.992376 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
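For context on the interleaved 344705 lines above: that process is waiting for the cluster's default service account, re-running `kubectl get sa default` roughly every 500ms until it succeeds (pods cannot be created before the service account exists). A minimal Go sketch of that poll, with the command line taken from the log and the overall timeout assumed:

    package main

    import (
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // timeout assumed, not shown in the log
    	for time.Now().Before(deadline) {
    		// same command the log shows; minikube runs it on the node via its ssh_runner
    		err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.20.7/kubectl",
    			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
    		if err == nil {
    			return // default service account exists; the cluster is usable
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }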
I0609 01:40:37.051194 352096 out.go:197] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0609 01:40:37.051469 352096 start.go:160] libmachine.API.Create for "calico-20210609012810-9941" (driver="docker")
I0609 01:40:37.051513 352096 client.go:168] LocalClient.Create starting
I0609 01:40:37.051649 352096 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem
I0609 01:40:37.051689 352096 main.go:128] libmachine: Decoding PEM data...
I0609 01:40:37.051712 352096 main.go:128] libmachine: Parsing certificate...
I0609 01:40:37.051880 352096 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem
I0609 01:40:37.051910 352096 main.go:128] libmachine: Decoding PEM data...
I0609 01:40:37.051926 352096 main.go:128] libmachine: Parsing certificate...
I0609 01:40:37.052424 352096 cli_runner.go:115] Run: docker network inspect calico-20210609012810-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0609 01:40:37.099637 352096 cli_runner.go:162] docker network inspect calico-20210609012810-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0609 01:40:37.099719 352096 network_create.go:255] running [docker network inspect calico-20210609012810-9941] to gather additional debugging logs...
I0609 01:40:37.099742 352096 cli_runner.go:115] Run: docker network inspect calico-20210609012810-9941
W0609 01:40:37.138707 352096 cli_runner.go:162] docker network inspect calico-20210609012810-9941 returned with exit code 1
I0609 01:40:37.138742 352096 network_create.go:258] error running [docker network inspect calico-20210609012810-9941]: docker network inspect calico-20210609012810-9941: exit status 1
stdout:
[]
stderr:
Error: No such network: calico-20210609012810-9941
I0609 01:40:37.138765 352096 network_create.go:260] output of [docker network inspect calico-20210609012810-9941]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: calico-20210609012810-9941
** /stderr **
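The exit-status-1 path above is the expected "network not there yet" branch: the named docker network is probed with `docker network inspect` first, and only when the inspect fails (the "No such network" stderr captured above) does the code go on to create it. A sketch of that probe, assuming nothing beyond the docker CLI:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // networkExists shells out to `docker network inspect`; a non-zero exit,
    // as in the stderr block above, means the network still has to be created.
    func networkExists(name string) bool {
    	return exec.Command("docker", "network", "inspect", name).Run() == nil
    }

    func main() {
    	fmt.Println(networkExists("calico-20210609012810-9941"))
    }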
I0609 01:40:37.138809 352096 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0609 01:40:37.177770 352096 network.go:215] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-3efa0710be1e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:6f:c7:95:89}}
I0609 01:40:37.178451 352096 network.go:263] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc00072a3b8] misses:0}
I0609 01:40:37.178494 352096 network.go:210] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0609 01:40:37.178511 352096 network_create.go:106] attempt to create docker network calico-20210609012810-9941 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0609 01:40:37.178562 352096 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20210609012810-9941
I0609 01:40:37.256968 352096 network_create.go:90] docker network calico-20210609012810-9941 192.168.58.0/24 created
I0609 01:40:37.257004 352096 kic.go:106] calculated static IP "192.168.58.2" for the "calico-20210609012810-9941" container
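The subnet-picker lines above show the selection logic at work: 192.168.49.0/24 is already owned by the existing bridge br-3efa0710be1e, so the next candidate 192.168.58.0/24 is reserved, with .1 becoming the gateway and .2 the node's static IP. A sketch of that kind of scan; the step of 9 between candidates is inferred from the 49 -> 58 jump in this log, not confirmed by it:

    package main

    import (
    	"fmt"
    	"net"
    )

    // subnetTaken reports whether any local interface already sits inside cidr.
    func subnetTaken(cidr string) bool {
    	_, want, _ := net.ParseCIDR(cidr)
    	addrs, _ := net.InterfaceAddrs()
    	for _, a := range addrs {
    		if ip, _, err := net.ParseCIDR(a.String()); err == nil && want.Contains(ip) {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	for octet := 49; octet < 255; octet += 9 { // step inferred from the log
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !subnetTaken(cidr) {
    			fmt.Println("using free private subnet", cidr) // gateway .1, node .2
    			return
    		}
    	}
    }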
I0609 01:40:37.257070 352096 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0609 01:40:37.300737 352096 cli_runner.go:115] Run: docker volume create calico-20210609012810-9941 --label name.minikube.sigs.k8s.io=calico-20210609012810-9941 --label created_by.minikube.sigs.k8s.io=true
I0609 01:40:37.340542 352096 oci.go:102] Successfully created a docker volume calico-20210609012810-9941
I0609 01:40:37.340623 352096 cli_runner.go:115] Run: docker run --rm --name calico-20210609012810-9941-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210609012810-9941 --entrypoint /usr/bin/test -v calico-20210609012810-9941:/var gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -d /var/lib
I0609 01:40:38.148995 352096 oci.go:106] Successfully prepared a docker volume calico-20210609012810-9941
W0609 01:40:38.149052 352096 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0609 01:40:38.149065 352096 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0609 01:40:38.149126 352096 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
I0609 01:40:38.149132 352096 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0609 01:40:38.149158 352096 kic.go:179] Starting extracting preloaded images to volume ...
I0609 01:40:38.149224 352096 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20210609012810-9941:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir
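The two `docker run --rm` one-shots above are the volume-provisioning trick: first a sidecar whose entrypoint is just /usr/bin/test, run only to force-create the named volume's /var tree, then a tar container that unpacks the lz4 preload into the volume so dockerd inside the node boots with its images already in place. A sketch of the second step, with the long host paths from the log abbreviated:

    package main

    import "os/exec"

    // extractPreload mirrors the `docker run --rm --entrypoint /usr/bin/tar`
    // line above: the tarball is mounted read-only, the named volume is the
    // extraction target, and tar inside the base image does the unpacking.
    func extractPreload(tarball, volume, baseImage string) error {
    	return exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		baseImage, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").Run()
    }

    func main() {
    	_ = extractPreload("preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4",
    		"calico-20210609012810-9941", "gcr.io/k8s-minikube/kicbase:v0.0.23")
    }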
I0609 01:40:38.241538 352096 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20210609012810-9941 --name calico-20210609012810-9941 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210609012810-9941 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20210609012810-9941 --network calico-20210609012810-9941 --ip 192.168.58.2 --volume calico-20210609012810-9941:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45
I0609 01:40:38.853918 352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Running}}
I0609 01:40:38.906203 352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
I0609 01:40:38.959124 352096 cli_runner.go:115] Run: docker exec calico-20210609012810-9941 stat /var/lib/dpkg/alternatives/iptables
I0609 01:40:39.108798 352096 oci.go:278] the created container "calico-20210609012810-9941" has a running status.
I0609 01:40:39.108836 352096 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa...
I0609 01:40:39.198235 352096 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0609 01:40:39.602006 352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
I0609 01:40:39.652085 352096 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0609 01:40:39.652109 352096 kic_runner.go:115] Args: [docker exec --privileged calico-20210609012810-9941 chown docker:docker /home/docker/.ssh/authorized_keys]
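The kic_runner lines above install the freshly generated SSH key inside the node container: the public key lands in /home/docker/.ssh/authorized_keys, then a privileged exec chowns it to the docker user so SSH logins over the published port work. Whether the copy uses `docker cp` or an exec pipe is not visible in the log; `docker cp` stands in for it in this sketch:

    package main

    import "os/exec"

    // installAuthorizedKey pushes the public key into the node container and
    // fixes its ownership, as the two kic_runner steps above do.
    func installAuthorizedKey(container, pubKey string) error {
    	if err := exec.Command("docker", "cp", pubKey, // copy mechanism assumed
    		container+":/home/docker/.ssh/authorized_keys").Run(); err != nil {
    		return err
    	}
    	return exec.Command("docker", "exec", "--privileged", container,
    		"chown", "docker:docker", "/home/docker/.ssh/authorized_keys").Run()
    }

    func main() {
    	_ = installAuthorizedKey("calico-20210609012810-9941", "id_rsa.pub")
    }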
I0609 01:40:40.132328 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:40:40.865096 300573 pod_ready.go:102] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"False"
I0609 01:40:42.865643 300573 pod_ready.go:102] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"False"
I0609 01:40:41.950654 352096 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20210609012810-9941:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir: (3.801357977s)
I0609 01:40:41.950723 352096 kic.go:188] duration metric: took 3.801562 seconds to extract preloaded images to volume
I0609 01:40:41.950817 352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
I0609 01:40:41.990470 352096 machine.go:88] provisioning docker machine ...
I0609 01:40:41.990506 352096 ubuntu.go:169] provisioning hostname "calico-20210609012810-9941"
I0609 01:40:41.990596 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:40:42.031665 352096 main.go:128] libmachine: Using SSH client type: native
I0609 01:40:42.031889 352096 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32985 <nil> <nil>}
I0609 01:40:42.031912 352096 main.go:128] libmachine: About to run SSH command:
sudo hostname calico-20210609012810-9941 && echo "calico-20210609012810-9941" | sudo tee /etc/hostname
I0609 01:40:42.168989 352096 main.go:128] libmachine: SSH cmd err, output: <nil>: calico-20210609012810-9941
I0609 01:40:42.169058 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:40:42.214838 352096 main.go:128] libmachine: Using SSH client type: native
I0609 01:40:42.214999 352096 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32985 <nil> <nil>}
I0609 01:40:42.215023 352096 main.go:128] libmachine: About to run SSH command:
if ! grep -xq '.*\scalico-20210609012810-9941' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20210609012810-9941/g' /etc/hosts;
else
echo '127.0.1.1 calico-20210609012810-9941' | sudo tee -a /etc/hosts;
fi
fi
I0609 01:40:42.332932 352096 main.go:128] libmachine: SSH cmd err, output: <nil>:
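The SSH script above (whose empty output is the `SSH cmd err, output: <nil>:` line) is the idempotent hostname fix-up: /etc/hosts is only touched when no line already ends with the hostname, and an existing 127.0.1.1 entry is rewritten in place rather than duplicated. A sketch of how such a script can be rendered per node; the helper name is hypothetical:

    package main

    import "fmt"

    // etcHostsScript renders the idempotent /etc/hosts script seen above for
    // one hostname (hypothetical helper, not minikube's actual function).
    func etcHostsScript(hostname string) string {
    	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
      fi
    fi`, hostname)
    }

    func main() { fmt.Println(etcHostsScript("calico-20210609012810-9941")) }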
I0609 01:40:42.332992 352096 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube}
I0609 01:40:42.333032 352096 ubuntu.go:177] setting up certificates
I0609 01:40:42.333040 352096 provision.go:83] configureAuth start
I0609 01:40:42.333091 352096 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210609012810-9941
I0609 01:40:42.372958 352096 provision.go:137] copyHostCerts
I0609 01:40:42.373013 352096 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem, removing ...
I0609 01:40:42.373030 352096 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem
I0609 01:40:42.373084 352096 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem (1082 bytes)
I0609 01:40:42.373174 352096 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem, removing ...
I0609 01:40:42.373185 352096 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem
I0609 01:40:42.373208 352096 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem (1123 bytes)
I0609 01:40:42.373272 352096 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem, removing ...
I0609 01:40:42.373298 352096 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem
I0609 01:40:42.373324 352096 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem (1679 bytes)
I0609 01:40:42.373372 352096 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem org=jenkins.calico-20210609012810-9941 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20210609012810-9941]
I0609 01:40:42.470940 352096 provision.go:171] copyRemoteCerts
I0609 01:40:42.470996 352096 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0609 01:40:42.471030 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:40:42.516819 352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
I0609 01:40:42.604293 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0609 01:40:42.620326 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0609 01:40:42.635125 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
I0609 01:40:42.650438 352096 provision.go:86] duration metric: configureAuth took 317.389022ms
I0609 01:40:42.650459 352096 ubuntu.go:193] setting minikube options for container-runtime
I0609 01:40:42.650643 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:40:42.690608 352096 main.go:128] libmachine: Using SSH client type: native
I0609 01:40:42.690768 352096 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32985 <nil> <nil>}
I0609 01:40:42.690789 352096 main.go:128] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0609 01:40:42.809400 352096 main.go:128] libmachine: SSH cmd err, output: <nil>: overlay
I0609 01:40:42.809436 352096 ubuntu.go:71] root file system type: overlay
I0609 01:40:42.809629 352096 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
I0609 01:40:42.809695 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:40:42.849952 352096 main.go:128] libmachine: Using SSH client type: native
I0609 01:40:42.850124 352096 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32985 <nil> <nil>}
I0609 01:40:42.850223 352096 main.go:128] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0609 01:40:42.982970 352096 main.go:128] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0609 01:40:42.983065 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:40:43.031885 352096 main.go:128] libmachine: Using SSH client type: native
I0609 01:40:43.032086 352096 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32985 <nil> <nil>}
I0609 01:40:43.032118 352096 main.go:128] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0609 01:40:43.625675 352096 main.go:128] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2021-06-02 11:54:50.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2021-06-09 01:40:42.981589018 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
+BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
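The long unified diff above is informational output, not an error: the SSH command at 01:40:43.032 leans on `diff -u` exiting non-zero when the rendered unit differs from what is on disk, so the `|| { ... }` branch installs the new file and bounces docker only when something actually changed. The same update-only-if-changed idiom in one place, as a sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // updateUnit installs newPath over oldPath only when the files differ,
    // mirroring the `diff -u ... || { mv; daemon-reload; restart; }` line above.
    func updateUnit(oldPath, newPath string) error {
    	if exec.Command("sudo", "diff", "-u", oldPath, newPath).Run() == nil {
    		return nil // identical: nothing to install, no docker restart
    	}
    	script := fmt.Sprintf("mv %s %s && systemctl daemon-reload && systemctl restart docker",
    		newPath, oldPath)
    	return exec.Command("sudo", "sh", "-c", script).Run()
    }

    func main() {
    	_ = updateUnit("/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new")
    }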
I0609 01:40:43.625711 352096 machine.go:91] provisioned docker machine in 1.635218617s
I0609 01:40:43.625725 352096 client.go:171] LocalClient.Create took 6.574201593s
I0609 01:40:43.625748 352096 start.go:168] duration metric: libmachine.API.Create for "calico-20210609012810-9941" took 6.574278241s
I0609 01:40:43.625761 352096 start.go:267] post-start starting for "calico-20210609012810-9941" (driver="docker")
I0609 01:40:43.625768 352096 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0609 01:40:43.625839 352096 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0609 01:40:43.625883 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:40:43.667182 352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
I0609 01:40:43.752939 352096 ssh_runner.go:149] Run: cat /etc/os-release
I0609 01:40:43.755722 352096 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0609 01:40:43.755749 352096 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0609 01:40:43.755763 352096 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0609 01:40:43.755771 352096 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0609 01:40:43.755788 352096 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/addons for local assets ...
I0609 01:40:43.755837 352096 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/files for local assets ...
I0609 01:40:43.755931 352096 start.go:270] post-start completed in 130.162299ms
I0609 01:40:43.756175 352096 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210609012810-9941
I0609 01:40:43.794853 352096 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/config.json ...
I0609 01:40:43.795091 352096 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0609 01:40:43.795138 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:40:43.833691 352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
I0609 01:40:43.917790 352096 start.go:129] duration metric: createHost completed in 6.868772218s
I0609 01:40:43.917824 352096 start.go:80] releasing machines lock for "calico-20210609012810-9941", held for 6.868947784s
I0609 01:40:43.917911 352096 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210609012810-9941
I0609 01:40:43.958012 352096 ssh_runner.go:149] Run: systemctl --version
I0609 01:40:43.958067 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:40:43.958087 352096 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0609 01:40:43.958148 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:40:43.999990 352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
I0609 01:40:44.000156 352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
I0609 01:40:44.105048 352096 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0609 01:40:44.113782 352096 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0609 01:40:44.122327 352096 cruntime.go:225] skipping containerd shutdown because we are bound to it
I0609 01:40:44.122397 352096 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0609 01:40:44.130910 352096 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0609 01:40:44.142773 352096 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
I0609 01:40:44.201078 352096 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
I0609 01:40:44.256269 352096 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0609 01:40:44.264833 352096 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0609 01:40:44.317328 352096 ssh_runner.go:149] Run: sudo systemctl start docker
I0609 01:40:44.325668 352096 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0609 01:40:40.492907 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:40.992189 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:41.493228 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:41.993005 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:42.492386 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:42.992261 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:43.493058 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:43.993022 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:44.492490 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:44.993036 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:44.373093 352096 out.go:197] * Preparing Kubernetes v1.20.7 on Docker 20.10.7 ...
I0609 01:40:44.373166 352096 cli_runner.go:115] Run: docker network inspect calico-20210609012810-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0609 01:40:44.410011 352096 ssh_runner.go:149] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts
I0609 01:40:44.413077 352096 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
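The bash one-liner above is the filter-and-append pattern for pinning host.minikube.internal to the network gateway (192.168.58.1 here): strip any stale mapping with `grep -v`, append the fresh line, write to a temp file, then `sudo cp` it back over /etc/hosts. The same logic in Go terms, as a sketch:

    package main

    import (
    	"os"
    	"strings"
    )

    // pinHost rewrites hosts-file contents the way the one-liner above does:
    // drop lines ending in "\t<name>", then append "ip\tname".
    func pinHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if line != "" && !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() { _ = pinHost("/etc/hosts", "192.168.58.1", "host.minikube.internal") }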
I0609 01:40:44.422262 352096 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/client.crt
I0609 01:40:44.422356 352096 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/client.key
I0609 01:40:44.422503 352096 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
I0609 01:40:44.422549 352096 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0609 01:40:44.461776 352096 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.20.7
k8s.gcr.io/kube-controller-manager:v1.20.7
k8s.gcr.io/kube-apiserver:v1.20.7
k8s.gcr.io/kube-scheduler:v1.20.7
gcr.io/k8s-minikube/storage-provisioner:v5
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
-- /stdout --
I0609 01:40:44.461803 352096 docker.go:466] Images already preloaded, skipping extraction
I0609 01:40:44.461856 352096 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0609 01:40:44.498947 352096 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.20.7
k8s.gcr.io/kube-controller-manager:v1.20.7
k8s.gcr.io/kube-apiserver:v1.20.7
k8s.gcr.io/kube-scheduler:v1.20.7
gcr.io/k8s-minikube/storage-provisioner:v5
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
-- /stdout --
I0609 01:40:44.498975 352096 cache_images.go:74] Images are preloaded, skipping loading
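The identical image list appears twice because `docker images` is run twice: once by the preload check right after the runtime comes up, and once by the cache layer deciding whether anything still needs loading. Both see the full v1.20.7 set, hence "Images are preloaded, skipping loading". The check itself reduces to a set comparison, roughly:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // imagesPreloaded lists what the runtime already has and verifies every
    // required image is present, as the skip decision above implies.
    func imagesPreloaded(required []string) (bool, error) {
    	out, err := exec.Command("docker", "images",
    		"--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		return false, err
    	}
    	have := map[string]bool{}
    	for _, ref := range strings.Fields(string(out)) {
    		have[ref] = true
    	}
    	for _, img := range required {
    		if !have[img] {
    			return false, nil // at least one image missing: must load
    		}
    	}
    	return true, nil
    }

    func main() {
    	ok, _ := imagesPreloaded([]string{"k8s.gcr.io/kube-apiserver:v1.20.7", "k8s.gcr.io/pause:3.2"})
    	fmt.Println(ok)
    }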
I0609 01:40:44.499029 352096 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0609 01:40:44.584207 352096 cni.go:93] Creating CNI manager for "calico"
I0609 01:40:44.584229 352096 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0609 01:40:44.584247 352096 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.20.7 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20210609012810-9941 NodeName:calico-20210609012810-9941 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0609 01:40:44.584403 352096 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.58.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "calico-20210609012810-9941"
kubeletExtraArgs:
node-ip: 192.168.58.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.7
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
I0609 01:40:44.584487 352096 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.7/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20210609012810-9941 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
[Install]
config:
{KubernetesVersion:v1.20.7 ClusterName:calico-20210609012810-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
I0609 01:40:44.584549 352096 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.7
I0609 01:40:44.591407 352096 binaries.go:44] Found k8s binaries, skipping transfer
I0609 01:40:44.591476 352096 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0609 01:40:44.597626 352096 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
I0609 01:40:44.609338 352096 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0609 01:40:44.620431 352096 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1885 bytes)
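"scp memory -->" in the three lines above means the kubelet drop-in, the kubelet unit, and the kubeadm config were rendered in memory and streamed to the node rather than staged as files on the host. Over the published SSH port that amounts to something like the following; the port and key location come from this log, while the ssh flags are assumptions:

    package main

    import (
    	"bytes"
    	"os/exec"
    )

    // pushFile streams an in-memory rendering to the node, standing in for
    // the "scp memory --> /var/tmp/minikube/kubeadm.yaml.new" step above.
    func pushFile(port, key, dest string, content []byte) error {
    	cmd := exec.Command("ssh", "-p", port, "-i", key,
    		"-o", "StrictHostKeyChecking=no", // flag assumed, not in the log
    		"docker@127.0.0.1",
    		"sudo tee "+dest+" >/dev/null")
    	cmd.Stdin = bytes.NewReader(content)
    	return cmd.Run()
    }

    func main() {
    	_ = pushFile("32985", "id_rsa", "/var/tmp/minikube/kubeadm.yaml.new",
    		[]byte("apiVersion: kubeadm.k8s.io/v1beta2\n"))
    }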
I0609 01:40:44.631725 352096 ssh_runner.go:149] Run: grep 192.168.58.2 control-plane.minikube.internal$ /etc/hosts
I0609 01:40:44.634357 352096 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0609 01:40:44.642326 352096 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941 for IP: 192.168.58.2
I0609 01:40:44.642377 352096 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.key
I0609 01:40:44.642394 352096 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.key
I0609 01:40:44.642461 352096 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/client.key
I0609 01:40:44.642481 352096 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key.cee25041
I0609 01:40:44.642488 352096 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0609 01:40:44.840681 352096 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt.cee25041 ...
I0609 01:40:44.840717 352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt.cee25041: {Name:mkfc84e07035095def340a1ef0c06b8c2f56c745 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:40:44.840897 352096 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key.cee25041 ...
I0609 01:40:44.840910 352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key.cee25041: {Name:mk3b1eccc9f0abe0f237561b0ecff13d04e9dd19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:40:44.840989 352096 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt
I0609 01:40:44.841051 352096 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key
I0609 01:40:44.841102 352096 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.key
I0609 01:40:44.841112 352096 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.crt with IP's: []
I0609 01:40:44.915955 352096 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.crt ...
I0609 01:40:44.915989 352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.crt: {Name:mkf48058b2fd1c7451a636bd94c7654745c05033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:40:44.916188 352096 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.key ...
I0609 01:40:44.916206 352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.key: {Name:mke09647dda418d05401ddeb31cf7b4c662417a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:40:44.916415 352096 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941.pem (1338 bytes)
W0609 01:40:44.916467 352096 certs.go:365] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941_empty.pem, impossibly tiny 0 bytes
I0609 01:40:44.916486 352096 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem (1675 bytes)
I0609 01:40:44.916523 352096 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem (1082 bytes)
I0609 01:40:44.916559 352096 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem (1123 bytes)
I0609 01:40:44.916590 352096 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem (1679 bytes)
I0609 01:40:44.917800 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0609 01:40:44.937170 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0609 01:40:44.956373 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0609 01:40:44.974933 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0609 01:40:44.991731 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0609 01:40:45.008489 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0609 01:40:45.031606 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0609 01:40:45.047895 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0609 01:40:45.064667 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941.pem --> /usr/share/ca-certificates/9941.pem (1338 bytes)
I0609 01:40:45.080936 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0609 01:40:45.096059 352096 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
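(The `scp memory` entries here and at the addon steps below write an in-memory buffer straight to a node path; there is no local source file. A rough hand-rolled equivalent over the profile's SSH tunnel — the key path and forwarded port are placeholders, not values from this run:)
-- sketch --
# stream a locally generated kubeconfig onto the node; tee runs under sudo there
ssh -i .minikube/machines/<profile>/id_rsa -p <forwarded-port> docker@127.0.0.1 \
    'sudo tee /var/lib/minikube/kubeconfig >/dev/null' < kubeconfig
-- /sketch --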
I0609 01:40:45.107015 352096 ssh_runner.go:149] Run: openssl version
I0609 01:40:45.111407 352096 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0609 01:40:45.119189 352096 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0609 01:40:45.121891 352096 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Jun 9 00:58 /usr/share/ca-certificates/minikubeCA.pem
I0609 01:40:45.121925 352096 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0609 01:40:45.126118 352096 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0609 01:40:45.132551 352096 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9941.pem && ln -fs /usr/share/ca-certificates/9941.pem /etc/ssl/certs/9941.pem"
I0609 01:40:45.138926 352096 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9941.pem
I0609 01:40:45.141619 352096 certs.go:410] hashing: -rw-r--r-- 1 root root 1338 Jun 9 01:04 /usr/share/ca-certificates/9941.pem
I0609 01:40:45.141657 352096 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9941.pem
I0609 01:40:45.145814 352096 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9941.pem /etc/ssl/certs/51391683.0"
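(The openssl/ln pairs above follow OpenSSL's hashed-directory convention: a CA under /etc/ssl/certs is located through a symlink named after its subject hash — hence `b5213941.0` for minikubeCA.pem. A minimal sketch of the same steps:)
-- sketch --
CERT=/usr/share/ca-certificates/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints the subject hash, e.g. b5213941
sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # OpenSSL resolves trust lookups via <hash>.0 links
-- /sketch --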
I0609 01:40:45.152149 352096 kubeadm.go:390] StartCluster: {Name:calico-20210609012810-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:calico-20210609012810-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0609 01:40:45.152257 352096 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0609 01:40:45.187288 352096 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0609 01:40:45.193888 352096 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0609 01:40:45.201487 352096 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0609 01:40:45.201538 352096 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0609 01:40:45.207661 352096 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0609 01:40:45.207713 352096 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
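(The failed `ls` above is the expected signal on a fresh node: none of the kubeadm-managed conf files exist yet, so stale-config cleanup is skipped and a full `kubeadm init` runs. Stripped of the long Dir/FileAvailable entries, the invocation has this shape — a sketch; the complete ignore list is in the line above:)
-- sketch --
sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH \
    kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
    --ignore-preflight-errors=Port-10250,Swap,Mem,SystemVerification
-- /sketch --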
I0609 01:40:43.186787 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:40:46.229769 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:40:45.365532 300573 pod_ready.go:102] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"False"
I0609 01:40:45.492939 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:45.992622 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:46.493059 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:46.992661 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:48.750771 344705 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.758074457s)
I0609 01:40:48.993021 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:49.269941 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:40:52.311061 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:40:51.493556 344705 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.500498227s)
I0609 01:40:51.992230 344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:40:52.180627 344705 kubeadm.go:985] duration metric: took 19.939502771s to wait for elevateKubeSystemPrivileges.
I0609 01:40:52.180659 344705 kubeadm.go:392] StartCluster complete in 33.745162361s
I0609 01:40:52.180680 344705 settings.go:142] acquiring lock: {Name:mk8746ecf7d8ca6a3508d1e45e55db2314c0e73d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:40:52.180766 344705 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig
I0609 01:40:52.182512 344705 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig: {Name:mk288d2c4fafd90028bf76db1824dfec28d92db3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:40:52.757936 344705 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cilium-20210609012810-9941" rescaled to 1
I0609 01:40:52.758013 344705 start.go:214] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}
I0609 01:40:52.759853 344705 out.go:170] * Verifying Kubernetes components...
I0609 01:40:52.758135 344705 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0609 01:40:52.759935 344705 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0609 01:40:52.758167 344705 addons.go:342] enableAddons start: toEnable=map[], additional=[]
I0609 01:40:52.760010 344705 addons.go:59] Setting storage-provisioner=true in profile "cilium-20210609012810-9941"
I0609 01:40:52.758404 344705 cache.go:108] acquiring lock: {Name:mk2dd9808d496cd84c38482eea9e354a60be2885 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0609 01:40:52.760030 344705 addons.go:59] Setting default-storageclass=true in profile "cilium-20210609012810-9941"
I0609 01:40:52.760049 344705 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cilium-20210609012810-9941"
I0609 01:40:52.760062 344705 addons.go:135] Setting addon storage-provisioner=true in "cilium-20210609012810-9941"
W0609 01:40:52.760082 344705 addons.go:147] addon storage-provisioner should already be in state true
I0609 01:40:52.760090 344705 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 exists
I0609 01:40:52.760113 344705 host.go:66] Checking if "cilium-20210609012810-9941" exists ...
I0609 01:40:52.760111 344705 cache.go:97] cache image "minikube-local-cache-test:functional-20210609010438-9941" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941" took 1.718093ms
I0609 01:40:52.760126 344705 cache.go:81] save to tar file minikube-local-cache-test:functional-20210609010438-9941 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 succeeded
I0609 01:40:52.760140 344705 cache.go:88] Successfully saved all images to host disk.
I0609 01:40:52.760541 344705 cli_runner.go:115] Run: docker container inspect cilium-20210609012810-9941 --format={{.State.Status}}
I0609 01:40:52.760709 344705 cli_runner.go:115] Run: docker container inspect cilium-20210609012810-9941 --format={{.State.Status}}
I0609 01:40:52.761714 344705 cli_runner.go:115] Run: docker container inspect cilium-20210609012810-9941 --format={{.State.Status}}
I0609 01:40:50.469695 300573 pod_ready.go:92] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"True"
I0609 01:40:50.469731 300573 pod_ready.go:81] duration metric: took 16.612054385s waiting for pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace to be "Ready" ...
I0609 01:40:50.469746 300573 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97rr9" in "kube-system" namespace to be "Ready" ...
I0609 01:40:51.488708 300573 pod_ready.go:92] pod "kube-proxy-97rr9" in "kube-system" namespace has status "Ready":"True"
I0609 01:40:51.488734 300573 pod_ready.go:81] duration metric: took 1.018979544s waiting for pod "kube-proxy-97rr9" in "kube-system" namespace to be "Ready" ...
I0609 01:40:51.488744 300573 pod_ready.go:38] duration metric: took 17.633659357s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0609 01:40:51.488765 300573 api_server.go:50] waiting for apiserver process to appear ...
I0609 01:40:51.488807 300573 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0609 01:40:51.520972 300573 api_server.go:70] duration metric: took 17.937884491s to wait for apiserver process to appear ...
I0609 01:40:51.520999 300573 api_server.go:86] waiting for apiserver healthz status ...
I0609 01:40:51.521011 300573 api_server.go:223] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0609 01:40:51.525448 300573 api_server.go:249] https://192.168.67.2:8443/healthz returned 200:
ok
I0609 01:40:51.526192 300573 api_server.go:139] control plane version: v1.14.0
I0609 01:40:51.526211 300573 api_server.go:129] duration metric: took 5.206469ms to wait for apiserver health ...
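(The healthz gate is a single HTTPS GET: a 200 whose body is `ok` — the two lines above — is what lets the wait move on. Reproduced by hand against this run's endpoint, assuming anonymous access to /healthz as the v1.14 apiserver here evidently allows:)
-- sketch --
curl -k https://192.168.67.2:8443/healthz   # -k skips CA verification; expect HTTP 200, body "ok"
-- /sketch --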
I0609 01:40:51.526219 300573 system_pods.go:43] waiting for kube-system pods to appear ...
I0609 01:40:51.528829 300573 system_pods.go:59] 4 kube-system pods found
I0609 01:40:51.528851 300573 system_pods.go:61] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:51.528856 300573 system_pods.go:61] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:51.528865 300573 system_pods.go:61] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:51.528871 300573 system_pods.go:61] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:51.528887 300573 system_pods.go:74] duration metric: took 2.66306ms to wait for pod list to return data ...
I0609 01:40:51.528896 300573 default_sa.go:34] waiting for default service account to be created ...
I0609 01:40:51.531122 300573 default_sa.go:45] found service account: "default"
I0609 01:40:51.531139 300573 default_sa.go:55] duration metric: took 2.23539ms for default service account to be created ...
I0609 01:40:51.531146 300573 system_pods.go:116] waiting for k8s-apps to be running ...
I0609 01:40:51.536460 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:51.536487 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:51.536494 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:51.536504 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:51.536517 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:51.536541 300573 retry.go:31] will retry after 214.282984ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:40:51.755301 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:51.755331 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:51.755339 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:51.755348 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:51.755355 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:51.755369 300573 retry.go:31] will retry after 293.852686ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:40:52.053824 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:52.053857 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:52.053865 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:52.053880 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:52.053892 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:52.053908 300573 retry.go:31] will retry after 355.089487ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:40:52.413227 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:52.413262 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:52.413272 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:52.413282 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:52.413289 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:52.413304 300573 retry.go:31] will retry after 480.685997ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:40:52.898013 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:52.898051 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:52.898059 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:52.898071 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:52.898078 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:52.898093 300573 retry.go:31] will retry after 544.138839ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:40:53.446671 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:53.446706 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:53.446713 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:53.446722 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:53.446728 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:53.446742 300573 retry.go:31] will retry after 684.014074ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
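(Every `retry.go:31` line in this run is one pass of the same poll: list kube-system pods, compare against the required component set, then sleep a little longer than last time — 214ms, 293ms, 355ms, ... above. Reduced to a shell sketch with an assumed growth factor:)
-- sketch --
delay=0.2
until kubectl -n kube-system get pods | grep -q '^kube-apiserver'; do
    sleep "$delay"
    delay=$(echo "$delay * 1.4" | bc)   # backoff grows roughly like the intervals above
done
-- /sketch --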
I0609 01:40:52.840705 344705 out.go:170] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0609 01:40:52.840860 344705 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0609 01:40:52.840873 344705 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0609 01:40:52.840938 344705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210609012810-9941
I0609 01:40:52.820388 344705 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0609 01:40:52.841301 344705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210609012810-9941
I0609 01:40:52.823016 344705 addons.go:135] Setting addon default-storageclass=true in "cilium-20210609012810-9941"
W0609 01:40:52.841379 344705 addons.go:147] addon default-storageclass should already be in state true
I0609 01:40:52.841434 344705 host.go:66] Checking if "cilium-20210609012810-9941" exists ...
I0609 01:40:52.841999 344705 cli_runner.go:115] Run: docker container inspect cilium-20210609012810-9941 --format={{.State.Status}}
I0609 01:40:52.875619 344705 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0609 01:40:52.878520 344705 node_ready.go:35] waiting up to 5m0s for node "cilium-20210609012810-9941" to be "Ready" ...
I0609 01:40:52.883106 344705 node_ready.go:49] node "cilium-20210609012810-9941" has status "Ready":"True"
I0609 01:40:52.883125 344705 node_ready.go:38] duration metric: took 4.566542ms waiting for node "cilium-20210609012810-9941" to be "Ready" ...
I0609 01:40:52.883135 344705 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0609 01:40:52.901282 344705 pod_ready.go:78] waiting up to 5m0s for pod "cilium-2rdhk" in "kube-system" namespace to be "Ready" ...
I0609 01:40:52.905753 344705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32980 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/cilium-20210609012810-9941/id_rsa Username:docker}
I0609 01:40:52.913698 344705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32980 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/cilium-20210609012810-9941/id_rsa Username:docker}
I0609 01:40:52.924428 344705 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
I0609 01:40:52.924451 344705 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0609 01:40:52.924507 344705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210609012810-9941
I0609 01:40:52.985429 344705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32980 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/cilium-20210609012810-9941/id_rsa Username:docker}
I0609 01:40:53.093158 344705 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0609 01:40:53.182043 344705 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0609 01:40:53.354533 344705 start.go:725] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
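(The sed pipeline run at 01:40:52 is what produced this line: it splices a `hosts` stanza into the coredns ConfigMap just ahead of the `forward . /etc/resolv.conf` plugin, so pods can resolve the host gateway by name. The stanza it inserts is exactly:)
-- sketch --
hosts {
   192.168.49.1 host.minikube.internal
   fallthrough
}
-- /sketch --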
I0609 01:40:53.354610 344705 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.20.7
k8s.gcr.io/kube-apiserver:v1.20.7
k8s.gcr.io/kube-controller-manager:v1.20.7
k8s.gcr.io/kube-scheduler:v1.20.7
gcr.io/k8s-minikube/storage-provisioner:v5
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
-- /stdout --
I0609 01:40:53.354626 344705 docker.go:541] minikube-local-cache-test:functional-20210609010438-9941 wasn't preloaded
I0609 01:40:53.354641 344705 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210609010438-9941]
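(The "wasn't preloaded" verdict comes from diffing the `docker images` tag listing — the stdout block above — against the local cache; as a one-liner the check amounts to:)
-- sketch --
docker images --format '{{.Repository}}:{{.Tag}}' \
    | grep -qx minikube-local-cache-test:functional-20210609010438-9941 \
    || echo "needs transfer"
-- /sketch --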
I0609 01:40:53.355651 344705 image.go:133] retrieving image: minikube-local-cache-test:functional-20210609010438-9941
I0609 01:40:53.355676 344705 image.go:139] checking repository: index.docker.io/library/minikube-local-cache-test
I0609 01:40:53.588602 344705 out.go:170] * Enabled addons: storage-provisioner, default-storageclass
I0609 01:40:53.588639 344705 addons.go:344] enableAddons completed in 830.486904ms
W0609 01:40:54.204447 344705 image.go:146] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
I0609 01:40:54.204502 344705 image.go:147] short name: minikube-local-cache-test:functional-20210609010438-9941
I0609 01:40:54.205330 344705 image.go:175] daemon lookup for minikube-local-cache-test:functional-20210609010438-9941: Error response from daemon: reference does not exist
W0609 01:40:54.817533 344705 image.go:185] authn lookup for minikube-local-cache-test:functional-20210609010438-9941 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0609 01:40:54.940307 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:40:55.379843 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:40:54.134198 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:54.134226 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:54.134231 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:54.134238 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:54.134242 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:54.134254 300573 retry.go:31] will retry after 1.039068543s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:40:55.178626 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:55.178662 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:55.178669 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:55.178679 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:55.178691 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:55.178707 300573 retry.go:31] will retry after 1.02433744s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:40:56.206796 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:56.206822 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:56.206828 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:56.206835 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:56.206839 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:56.206851 300573 retry.go:31] will retry after 1.268973106s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:40:57.480720 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:57.480751 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:57.480759 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:57.480771 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:57.480778 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:57.480796 300573 retry.go:31] will retry after 1.733071555s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:40:55.410467 344705 image.go:189] remote lookup for minikube-local-cache-test:functional-20210609010438-9941: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0609 01:40:55.410515 344705 image.go:92] error retrieve Image minikube-local-cache-test:functional-20210609010438-9941 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0609 01:40:55.410544 344705 cache_images.go:106] "minikube-local-cache-test:functional-20210609010438-9941" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210609010438-9941
I0609 01:40:55.410583 344705 docker.go:236] Removing image: minikube-local-cache-test:functional-20210609010438-9941
I0609 01:40:55.410638 344705 ssh_runner.go:149] Run: docker rmi minikube-local-cache-test:functional-20210609010438-9941
I0609 01:40:55.448411 344705 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941
I0609 01:40:55.448506 344705 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
I0609 01:40:55.451714 344705 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941': No such file or directory
I0609 01:40:55.451745 344705 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941 (5120 bytes)
I0609 01:40:55.471575 344705 docker.go:203] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
I0609 01:40:55.471628 344705 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
I0609 01:40:55.762458 344705 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 from cache
I0609 01:40:55.762495 344705 cache_images.go:113] Successfully loaded all cached images
I0609 01:40:55.762502 344705 cache_images.go:82] LoadImages completed in 2.407848633s
I0609 01:40:55.762517 344705 cache_images.go:252] succeeded pushing to: cilium-20210609012810-9941
I0609 01:40:55.762522 344705 cache_images.go:253] failed pushing to:
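(The transfer just logged is a three-step dance: `stat` the tar on the node — status 1 above means absent — scp it over from the host cache, then import it into the node's Docker daemon. On the node, the guard and the import are, as a sketch:)
-- sketch --
IMG=/var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
stat "$IMG" 2>/dev/null || echo "absent: copy from host cache first"
docker load -i "$IMG"   # imports the tarball into the daemon's image store
-- /sketch --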
I0609 01:40:57.446509 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:40:59.919287 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:00.317663 352096 out.go:197] - Generating certificates and keys ...
I0609 01:41:00.320816 352096 out.go:197] - Booting up control plane ...
I0609 01:41:00.323612 352096 out.go:197] - Configuring RBAC rules ...
I0609 01:41:00.325728 352096 cni.go:93] Creating CNI manager for "calico"
I0609 01:41:00.327397 352096 out.go:170] * Configuring Calico (Container Networking Interface) ...
I0609 01:41:00.327463 352096 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.20.7/kubectl ...
I0609 01:41:00.327482 352096 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (22544 bytes)
I0609 01:41:00.355615 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0609 01:41:01.345873 352096 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0609 01:41:01.346015 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:01.346096 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl label nodes minikube.k8s.io/version=v1.21.0-beta.0 minikube.k8s.io/commit=d4601e4184ba947b7d077d7d426c2bca79bbf9fc minikube.k8s.io/name=calico-20210609012810-9941 minikube.k8s.io/updated_at=2021_06_09T01_41_01_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
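(Both post-init fixups above are plain kubectl calls: a cluster-admin binding for kube-system's default service account, and minikube.k8s.io/* labels stamped on every node. A quick way to confirm the labels landed, as a sketch:)
-- sketch --
kubectl get nodes --show-labels | grep minikube.k8s.io/name
-- /sketch --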
I0609 01:40:58.423166 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:01.474794 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:40:59.218044 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:40:59.218071 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:59.218077 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:59.218084 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:40:59.218089 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:40:59.218101 300573 retry.go:31] will retry after 2.410580953s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:41:01.632429 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:41:01.632456 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:01.632462 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:01.632469 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:41:01.632476 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:01.632489 300573 retry.go:31] will retry after 3.437877504s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:41:02.460409 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:04.920306 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:01.767984 352096 ops.go:34] apiserver oom_adj: -16
I0609 01:41:01.768084 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:02.480180 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:02.980220 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:03.480904 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:03.980208 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:04.480690 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:04.980710 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:05.480647 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:05.979985 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:06.480212 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:04.521744 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:05.073834 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:41:05.073863 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:05.073868 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:05.073876 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:41:05.073881 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:05.073895 300573 retry.go:31] will retry after 3.261655801s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:41:08.339005 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:41:08.339042 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:08.339049 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:08.339061 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:41:08.339067 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:08.339081 300573 retry.go:31] will retry after 4.086092664s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:41:07.419175 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:09.443670 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:06.980032 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:07.480282 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:07.980274 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:08.480263 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:08.980571 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:09.480813 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:09.980588 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:10.480840 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:10.980186 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:11.480965 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:07.580079 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:10.622741 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:13.117286 300573 system_pods.go:86] 4 kube-system pods found
I0609 01:41:13.117320 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:13.117328 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:13.117340 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:41:13.117348 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:13.117364 300573 retry.go:31] will retry after 6.402197611s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0609 01:41:13.726560 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:11.980058 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:13.480528 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:13.980786 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:15.479870 352096 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.499049149s)
I0609 01:41:15.479969 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:16.480635 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:13.666259 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:16.715529 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:16.980322 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:17.480064 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:17.980779 352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:18.071429 352096 kubeadm.go:985] duration metric: took 16.725453565s to wait for elevateKubeSystemPrivileges.
I0609 01:41:18.071462 352096 kubeadm.go:392] StartCluster complete in 32.919320287s
I0609 01:41:18.071483 352096 settings.go:142] acquiring lock: {Name:mk8746ecf7d8ca6a3508d1e45e55db2314c0e73d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:41:18.071570 352096 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig
I0609 01:41:18.073757 352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig: {Name:mk288d2c4fafd90028bf76db1824dfec28d92db3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:41:18.664569 352096 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20210609012810-9941" rescaled to 1
I0609 01:41:18.664632 352096 start.go:214] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}
I0609 01:41:18.664651 352096 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0609 01:41:18.664714 352096 addons.go:342] enableAddons start: toEnable=map[], additional=[]
I0609 01:41:18.666538 352096 out.go:170] * Verifying Kubernetes components...
I0609 01:41:18.664779 352096 addons.go:59] Setting storage-provisioner=true in profile "calico-20210609012810-9941"
I0609 01:41:18.666596 352096 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0609 01:41:18.666612 352096 addons.go:135] Setting addon storage-provisioner=true in "calico-20210609012810-9941"
W0609 01:41:18.666630 352096 addons.go:147] addon storage-provisioner should already be in state true
I0609 01:41:18.664791 352096 addons.go:59] Setting default-storageclass=true in profile "calico-20210609012810-9941"
I0609 01:41:18.666671 352096 host.go:66] Checking if "calico-20210609012810-9941" exists ...
I0609 01:41:18.666676 352096 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20210609012810-9941"
I0609 01:41:18.664965 352096 cache.go:108] acquiring lock: {Name:mk2dd9808d496cd84c38482eea9e354a60be2885 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0609 01:41:18.666833 352096 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 exists
I0609 01:41:18.666855 352096 cache.go:97] cache image "minikube-local-cache-test:functional-20210609010438-9941" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941" took 1.89821ms
I0609 01:41:18.666869 352096 cache.go:81] save to tar file minikube-local-cache-test:functional-20210609010438-9941 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 succeeded
I0609 01:41:18.666879 352096 cache.go:88] Successfully saved all images to host disk.
I0609 01:41:18.667046 352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
I0609 01:41:18.667251 352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
I0609 01:41:18.667265 352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
I0609 01:41:18.711328 352096 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0609 01:41:18.711376 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:41:16.464152 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:18.919739 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:18.722674 352096 out.go:170] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0609 01:41:18.722788 352096 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0609 01:41:18.722802 352096 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0609 01:41:18.722851 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:41:18.758518 352096 addons.go:135] Setting addon default-storageclass=true in "calico-20210609012810-9941"
W0609 01:41:18.758544 352096 addons.go:147] addon default-storageclass should already be in state true
I0609 01:41:18.758573 352096 host.go:66] Checking if "calico-20210609012810-9941" exists ...
I0609 01:41:18.759066 352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
I0609 01:41:18.770750 352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
I0609 01:41:18.794220 352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
I0609 01:41:18.806700 352096 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
I0609 01:41:18.806724 352096 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0609 01:41:18.806770 352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
I0609 01:41:18.861723 352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
I0609 01:41:19.254824 352096 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0609 01:41:19.257472 352096 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0609 01:41:19.269050 352096 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.58.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0609 01:41:19.269206 352096 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.20.7
k8s.gcr.io/kube-apiserver:v1.20.7
k8s.gcr.io/kube-controller-manager:v1.20.7
k8s.gcr.io/kube-scheduler:v1.20.7
gcr.io/k8s-minikube/storage-provisioner:v5
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
-- /stdout --
I0609 01:41:19.269224 352096 docker.go:541] minikube-local-cache-test:functional-20210609010438-9941 wasn't preloaded
I0609 01:41:19.269233 352096 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210609010438-9941]
I0609 01:41:19.270563 352096 node_ready.go:35] waiting up to 5m0s for node "calico-20210609012810-9941" to be "Ready" ...
I0609 01:41:19.270617 352096 image.go:133] retrieving image: minikube-local-cache-test:functional-20210609010438-9941
I0609 01:41:19.270639 352096 image.go:139] checking repository: index.docker.io/library/minikube-local-cache-test
I0609 01:41:19.344594 352096 node_ready.go:49] node "calico-20210609012810-9941" has status "Ready":"True"
I0609 01:41:19.344625 352096 node_ready.go:38] duration metric: took 74.017948ms waiting for node "calico-20210609012810-9941" to be "Ready" ...
I0609 01:41:19.344637 352096 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0609 01:41:19.359631 352096 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace to be "Ready" ...
W0609 01:41:20.095801 352096 image.go:146] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
I0609 01:41:20.095863 352096 image.go:147] short name: minikube-local-cache-test:functional-20210609010438-9941
I0609 01:41:20.096813 352096 image.go:175] daemon lookup for minikube-local-cache-test:functional-20210609010438-9941: Error response from daemon: reference does not exist
I0609 01:41:20.438848 352096 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.18134229s)
I0609 01:41:20.438935 352096 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.58.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.169850353s)
I0609 01:41:20.438963 352096 start.go:725] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
I0609 01:41:20.441405 352096 out.go:170] * Enabled addons: default-storageclass, storage-provisioner
I0609 01:41:20.441438 352096 addons.go:344] enableAddons completed in 1.776732349s
W0609 01:41:20.710811 352096 image.go:185] authn lookup for minikube-local-cache-test:functional-20210609010438-9941 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0609 01:41:21.301766 352096 image.go:189] remote lookup for minikube-local-cache-test:functional-20210609010438-9941: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0609 01:41:21.301819 352096 image.go:92] error retrieve Image minikube-local-cache-test:functional-20210609010438-9941 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
I0609 01:41:21.301851 352096 cache_images.go:106] "minikube-local-cache-test:functional-20210609010438-9941" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210609010438-9941
I0609 01:41:21.301896 352096 docker.go:236] Removing image: minikube-local-cache-test:functional-20210609010438-9941
I0609 01:41:21.301940 352096 ssh_runner.go:149] Run: docker rmi minikube-local-cache-test:functional-20210609010438-9941
I0609 01:41:21.448602 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:21.464097 352096 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941
I0609 01:41:21.464209 352096 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
I0609 01:41:21.467662 352096 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941': No such file or directory
I0609 01:41:21.467695 352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941 (5120 bytes)
I0609 01:41:21.553071 352096 docker.go:203] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
I0609 01:41:21.553158 352096 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
I0609 01:41:19.755463 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:19.524872 300573 system_pods.go:86] 7 kube-system pods found
I0609 01:41:19.524911 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:19.524921 300573 system_pods.go:89] "kube-apiserver-old-k8s-version-20210609012901-9941" [c8550af5-c8c3-11eb-a78f-02427f02d9a2] Pending
I0609 01:41:19.524931 300573 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210609012901-9941" [c8ed9fe0-c8c3-11eb-a78f-02427f02d9a2] Pending
I0609 01:41:19.524938 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:19.524948 300573 system_pods.go:89] "kube-scheduler-old-k8s-version-20210609012901-9941" [c768fa6f-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:19.524961 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:41:19.524978 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:19.524996 300573 retry.go:31] will retry after 6.062999549s: missing components: etcd, kube-apiserver, kube-controller-manager
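[editorial sketch] Each system_pods poll above ends with a "will retry after ..." line from retry.go. The following self-contained sketch shows that poll-with-jittered-backoff pattern; it assumes nothing about minikube's internal retry package, and retryUntil and its parameters are hypothetical:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil polls check until it succeeds or the deadline passes,
// sleeping a growing, jittered backoff between attempts.
func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	backoff := 2 * time.Second
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: last error: %w", err)
		}
		// Jitter the sleep so parallel pollers do not synchronize; this is
		// why the log shows odd durations like 6.062999549s.
		sleep := backoff + time.Duration(rand.Int63n(int64(time.Second)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		backoff *= 2
	}
}

func main() {
	missing := []string{"etcd", "kube-apiserver"}
	_ = retryUntil(30*time.Second, func() error {
		if len(missing) == 0 {
			return nil
		}
		missing = missing[1:] // pretend one component becomes Ready per poll
		return errors.New("missing components")
	})
}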
I0609 01:41:21.419636 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:23.919505 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:21.913966 352096 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 from cache
I0609 01:41:21.914009 352096 cache_images.go:113] Successfully loaded all cached images
I0609 01:41:21.914025 352096 cache_images.go:82] LoadImages completed in 2.644783095s
I0609 01:41:21.914043 352096 cache_images.go:252] succeeded pushing to: calico-20210609012810-9941
I0609 01:41:21.914049 352096 cache_images.go:253] failed pushing to:
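[editorial sketch] The stanza above is the full cache path for a locally built image: a stat existence check fails on the node, the tarball is copied over, and docker load imports it. A condensed sketch of that sequence; runOnNode is a hypothetical stand-in for minikube's SSH runner (here it just shells out locally so the sketch stays self-contained), and the paths are illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// runOnNode stands in for the ssh_runner seen in the log.
func runOnNode(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

func ensureImageLoaded(localTar, remoteTar string) error {
	// Existence check: stat exits non-zero when the file is absent,
	// exactly like the "Process exited with status 1" entry above.
	if err := runOnNode("stat", remoteTar); err != nil {
		// Transfer step; in minikube this is an scp over the SSH session.
		if err := runOnNode("cp", localTar, remoteTar); err != nil {
			return fmt.Errorf("transfer %s: %w", localTar, err)
		}
	}
	// Load the tarball into the node's docker daemon.
	if err := runOnNode("docker", "load", "-i", remoteTar); err != nil {
		return fmt.Errorf("docker load: %w", err)
	}
	return nil
}

func main() {
	fmt.Println(ensureImageLoaded(
		"/tmp/minikube-local-cache-test.tar",
		"/var/lib/minikube/images/minikube-local-cache-test.tar"))
}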
I0609 01:41:23.875804 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:25.876212 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:22.798808 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:25.839455 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:25.592272 300573 system_pods.go:86] 7 kube-system pods found
I0609 01:41:25.592298 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:25.592304 300573 system_pods.go:89] "kube-apiserver-old-k8s-version-20210609012901-9941" [c8550af5-c8c3-11eb-a78f-02427f02d9a2] Pending
I0609 01:41:25.592308 300573 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210609012901-9941" [c8ed9fe0-c8c3-11eb-a78f-02427f02d9a2] Pending
I0609 01:41:25.592311 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:25.592317 300573 system_pods.go:89] "kube-scheduler-old-k8s-version-20210609012901-9941" [c768fa6f-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:25.592325 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:41:25.592331 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:25.592342 300573 retry.go:31] will retry after 10.504197539s: missing components: etcd, kube-apiserver, kube-controller-manager
I0609 01:41:25.919767 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:28.419788 344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:28.920252 344705 pod_ready.go:92] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"True"
I0609 01:41:28.920277 344705 pod_ready.go:81] duration metric: took 36.018972007s waiting for pod "cilium-2rdhk" in "kube-system" namespace to be "Ready" ...
I0609 01:41:28.920288 344705 pod_ready.go:78] waiting up to 5m0s for pod "cilium-operator-7c755f4594-2x5fn" in "kube-system" namespace to be "Ready" ...
I0609 01:41:28.924675 344705 pod_ready.go:92] pod "cilium-operator-7c755f4594-2x5fn" in "kube-system" namespace has status "Ready":"True"
I0609 01:41:28.924691 344705 pod_ready.go:81] duration metric: took 4.397091ms waiting for pod "cilium-operator-7c755f4594-2x5fn" in "kube-system" namespace to be "Ready" ...
I0609 01:41:28.924702 344705 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-42mdv" in "kube-system" namespace to be "Ready" ...
I0609 01:41:28.929071 344705 pod_ready.go:92] pod "coredns-74ff55c5b-42mdv" in "kube-system" namespace has status "Ready":"True"
I0609 01:41:28.929091 344705 pod_ready.go:81] duration metric: took 4.382306ms waiting for pod "coredns-74ff55c5b-42mdv" in "kube-system" namespace to be "Ready" ...
I0609 01:41:28.929102 344705 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-jv4pl" in "kube-system" namespace to be "Ready" ...
I0609 01:41:28.931060 344705 pod_ready.go:97] error getting pod "coredns-74ff55c5b-jv4pl" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-jv4pl" not found
I0609 01:41:28.931084 344705 pod_ready.go:81] duration metric: took 1.975143ms waiting for pod "coredns-74ff55c5b-jv4pl" in "kube-system" namespace to be "Ready" ...
E0609 01:41:28.931095 344705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-74ff55c5b-jv4pl" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-jv4pl" not found
I0609 01:41:28.931103 344705 pod_ready.go:78] waiting up to 5m0s for pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
I0609 01:41:27.876306 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:30.376138 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:28.884648 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:31.933672 329232 stop.go:59] stop err: Maximum number of retries (60) exceeded
I0609 01:41:31.933729 329232 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
I0609 01:41:31.934195 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
W0609 01:41:31.985166 329232 delete.go:135] deletehost failed: Docker machine "auto-20210609012809-9941" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
I0609 01:41:31.985255 329232 cli_runner.go:115] Run: docker container inspect -f {{.Id}} auto-20210609012809-9941
I0609 01:41:32.031852 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:32.081551 329232 cli_runner.go:115] Run: docker exec --privileged -t auto-20210609012809-9941 /bin/bash -c "sudo init 0"
W0609 01:41:32.125884 329232 cli_runner.go:162] docker exec --privileged -t auto-20210609012809-9941 /bin/bash -c "sudo init 0" returned with exit code 1
I0609 01:41:32.125930 329232 oci.go:632] error shutdown auto-20210609012809-9941: docker exec --privileged -t auto-20210609012809-9941 /bin/bash -c "sudo init 0": exit status 1
stdout:
stderr:
Error response from daemon: Container bc54bc9bf415ee2bb0df1bcad0aed4e971bd39991c0782ffae750733117660bd is not running
I0609 01:41:33.127009 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:33.188615 329232 oci.go:646] temporary error: container auto-20210609012809-9941 status is but expect it to be exited
I0609 01:41:33.188641 329232 oci.go:652] Successfully shutdown container auto-20210609012809-9941
I0609 01:41:33.188680 329232 cli_runner.go:115] Run: docker rm -f -v auto-20210609012809-9941
I0609 01:41:33.232875 329232 cli_runner.go:115] Run: docker container inspect -f {{.Id}} auto-20210609012809-9941
W0609 01:41:33.278916 329232 cli_runner.go:162] docker container inspect -f {{.Id}} auto-20210609012809-9941 returned with exit code 1
I0609 01:41:33.279004 329232 cli_runner.go:115] Run: docker network inspect auto-20210609012809-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0609 01:41:33.317124 329232 cli_runner.go:162] docker network inspect auto-20210609012809-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0609 01:41:33.317184 329232 network_create.go:255] running [docker network inspect auto-20210609012809-9941] to gather additional debugging logs...
I0609 01:41:33.317205 329232 cli_runner.go:115] Run: docker network inspect auto-20210609012809-9941
W0609 01:41:33.354864 329232 cli_runner.go:162] docker network inspect auto-20210609012809-9941 returned with exit code 1
I0609 01:41:33.354894 329232 network_create.go:258] error running [docker network inspect auto-20210609012809-9941]: docker network inspect auto-20210609012809-9941: exit status 1
stdout:
[]
stderr:
Error: No such network: auto-20210609012809-9941
I0609 01:41:33.354910 329232 network_create.go:260] output of [docker network inspect auto-20210609012809-9941]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: auto-20210609012809-9941
** /stderr **
W0609 01:41:33.355033 329232 delete.go:139] delete failed (probably ok) <nil>
I0609 01:41:33.355043 329232 fix.go:120] Sleeping 1 second for extra luck!
I0609 01:41:34.355909 329232 start.go:126] createHost starting for "" (driver="docker")
I0609 01:41:30.941410 344705 pod_ready.go:102] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:32.942019 344705 pod_ready.go:102] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:34.942818 344705 pod_ready.go:102] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:32.377229 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:34.876436 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:34.358151 329232 out.go:197] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0609 01:41:34.358255 329232 start.go:160] libmachine.API.Create for "auto-20210609012809-9941" (driver="docker")
I0609 01:41:34.358292 329232 client.go:168] LocalClient.Create starting
I0609 01:41:34.358357 329232 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem
I0609 01:41:34.358386 329232 main.go:128] libmachine: Decoding PEM data...
I0609 01:41:34.358404 329232 main.go:128] libmachine: Parsing certificate...
I0609 01:41:34.358508 329232 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem
I0609 01:41:34.358532 329232 main.go:128] libmachine: Decoding PEM data...
I0609 01:41:34.358541 329232 main.go:128] libmachine: Parsing certificate...
I0609 01:41:34.358756 329232 cli_runner.go:115] Run: docker network inspect auto-20210609012809-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0609 01:41:34.402255 329232 cli_runner.go:162] docker network inspect auto-20210609012809-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0609 01:41:34.402349 329232 network_create.go:255] running [docker network inspect auto-20210609012809-9941] to gather additional debugging logs...
I0609 01:41:34.402373 329232 cli_runner.go:115] Run: docker network inspect auto-20210609012809-9941
W0609 01:41:34.447755 329232 cli_runner.go:162] docker network inspect auto-20210609012809-9941 returned with exit code 1
I0609 01:41:34.447782 329232 network_create.go:258] error running [docker network inspect auto-20210609012809-9941]: docker network inspect auto-20210609012809-9941: exit status 1
stdout:
[]
stderr:
Error: No such network: auto-20210609012809-9941
I0609 01:41:34.447793 329232 network_create.go:260] output of [docker network inspect auto-20210609012809-9941]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: auto-20210609012809-9941
** /stderr **
I0609 01:41:34.447829 329232 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0609 01:41:34.487524 329232 network.go:215] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-3efa0710be1e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:6f:c7:95:89}}
I0609 01:41:34.488287 329232 network.go:215] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-494a1c72530c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:2d:51:70:a3}}
I0609 01:41:34.489047 329232 network.go:215] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-3b40e12707af IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ac:37:f7:3a}}
I0609 01:41:34.489905 329232 network.go:263] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000136218 192.168.76.0:0xc000408548] misses:0}
I0609 01:41:34.489944 329232 network.go:210] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0609 01:41:34.489977 329232 network_create.go:106] attempt to create docker network auto-20210609012809-9941 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I0609 01:41:34.490049 329232 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20210609012809-9941
I0609 01:41:34.563866 329232 network_create.go:90] docker network auto-20210609012809-9941 192.168.76.0/24 created
I0609 01:41:34.563896 329232 kic.go:106] calculated static IP "192.168.76.2" for the "auto-20210609012809-9941" container
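[editorial sketch] The subnet scan above starts at 192.168.49.0/24 and steps the third octet by 9 (49, 58, 67, 76) until it finds a block no host interface owns, then passes .1 to docker network create as the gateway and reserves .2 as the container's static IP. A sketch under those observed assumptions; isTaken is a hypothetical probe, whereas minikube actually inspects host interfaces:

package main

import "fmt"

func pickSubnet(isTaken func(thirdOctet int) bool) (subnet, gateway, nodeIP string, err error) {
	for octet := 49; octet <= 255; octet += 9 {
		if isTaken(octet) {
			fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
			continue
		}
		return fmt.Sprintf("192.168.%d.0/24", octet), // network CIDR
			fmt.Sprintf("192.168.%d.1", octet), // gateway, passed to --gateway
			fmt.Sprintf("192.168.%d.2", octet), // first client IP, passed to --ip
			nil
	}
	return "", "", "", fmt.Errorf("no free /24 found")
}

func main() {
	taken := map[int]bool{49: true, 58: true, 67: true} // as in the log
	fmt.Println(pickSubnet(func(o int) bool { return taken[o] }))
}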
I0609 01:41:34.563950 329232 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0609 01:41:34.605010 329232 cli_runner.go:115] Run: docker volume create auto-20210609012809-9941 --label name.minikube.sigs.k8s.io=auto-20210609012809-9941 --label created_by.minikube.sigs.k8s.io=true
I0609 01:41:34.642891 329232 oci.go:102] Successfully created a docker volume auto-20210609012809-9941
I0609 01:41:34.642974 329232 cli_runner.go:115] Run: docker run --rm --name auto-20210609012809-9941-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210609012809-9941 --entrypoint /usr/bin/test -v auto-20210609012809-9941:/var gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -d /var/lib
I0609 01:41:35.363820 329232 oci.go:106] Successfully prepared a docker volume auto-20210609012809-9941
W0609 01:41:35.363866 329232 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0609 01:41:35.363875 329232 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0609 01:41:35.363883 329232 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
I0609 01:41:35.363916 329232 kic.go:179] Starting extracting preloaded images to volume ...
I0609 01:41:35.363930 329232 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0609 01:41:35.363995 329232 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210609012809-9941:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir
I0609 01:41:35.467993 329232 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20210609012809-9941 --name auto-20210609012809-9941 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210609012809-9941 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20210609012809-9941 --network auto-20210609012809-9941 --ip 192.168.76.2 --volume auto-20210609012809-9941:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45
I0609 01:41:35.995981 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Running}}
I0609 01:41:36.052103 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:36.105861 329232 cli_runner.go:115] Run: docker exec auto-20210609012809-9941 stat /var/lib/dpkg/alternatives/iptables
I0609 01:41:36.272972 329232 oci.go:278] the created container "auto-20210609012809-9941" has a running status.
I0609 01:41:36.273013 329232 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa...
I0609 01:41:36.425757 329232 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0609 01:41:36.825610 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:36.868189 329232 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0609 01:41:36.868214 329232 kic_runner.go:115] Args: [docker exec --privileged auto-20210609012809-9941 chown docker:docker /home/docker/.ssh/authorized_keys]
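[editorial sketch] kic.go above generates a fresh RSA keypair per node, copies the public half to /home/docker/.ssh/authorized_keys, and chowns it to docker:docker. A minimal sketch of the key-generation step; marshaling the OpenSSH public-key line uses golang.org/x/crypto/ssh, an external module assumed to be available, and the output paths are placeholders:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	// Private key in PEM form, as stored under .minikube/machines/<name>/id_rsa.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		log.Fatal(err)
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	// This one-line payload is what the log shows being copied to
	// /home/docker/.ssh/authorized_keys inside the node container.
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		log.Fatal(err)
	}
}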
I0609 01:41:36.102263 300573 system_pods.go:86] 8 kube-system pods found
I0609 01:41:36.102300 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:36.102308 300573 system_pods.go:89] "etcd-old-k8s-version-20210609012901-9941" [d1de6264-c8c3-11eb-a78f-02427f02d9a2] Pending
I0609 01:41:36.102315 300573 system_pods.go:89] "kube-apiserver-old-k8s-version-20210609012901-9941" [c8550af5-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:36.102323 300573 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210609012901-9941" [c8ed9fe0-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:36.102329 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:36.102336 300573 system_pods.go:89] "kube-scheduler-old-k8s-version-20210609012901-9941" [c768fa6f-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:36.102347 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:41:36.102364 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:36.102381 300573 retry.go:31] will retry after 12.194240946s: missing components: etcd
I0609 01:41:37.093269 344705 pod_ready.go:102] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:39.442809 344705 pod_ready.go:102] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:39.940516 344705 pod_ready.go:92] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"True"
I0609 01:41:39.940545 344705 pod_ready.go:81] duration metric: took 11.009433469s waiting for pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
I0609 01:41:39.940560 344705 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-cilium-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
I0609 01:41:39.944617 344705 pod_ready.go:92] pod "kube-apiserver-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"True"
I0609 01:41:39.944633 344705 pod_ready.go:81] duration metric: took 4.066455ms waiting for pod "kube-apiserver-cilium-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
I0609 01:41:39.944642 344705 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
I0609 01:41:37.080706 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:39.379466 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:41.383974 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:39.584397 329232 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210609012809-9941:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir: (4.220346647s)
I0609 01:41:39.584427 329232 kic.go:188] duration metric: took 4.220510 seconds to extract preloaded images to volume
I0609 01:41:39.584497 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
I0609 01:41:39.635769 329232 machine.go:88] provisioning docker machine ...
I0609 01:41:39.635827 329232 ubuntu.go:169] provisioning hostname "auto-20210609012809-9941"
I0609 01:41:39.635904 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:39.684460 329232 main.go:128] libmachine: Using SSH client type: native
I0609 01:41:39.684645 329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32990 <nil> <nil>}
I0609 01:41:39.684660 329232 main.go:128] libmachine: About to run SSH command:
sudo hostname auto-20210609012809-9941 && echo "auto-20210609012809-9941" | sudo tee /etc/hostname
I0609 01:41:39.841506 329232 main.go:128] libmachine: SSH cmd err, output: <nil>: auto-20210609012809-9941
I0609 01:41:39.841577 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:39.885725 329232 main.go:128] libmachine: Using SSH client type: native
I0609 01:41:39.885870 329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32990 <nil> <nil>}
I0609 01:41:39.885889 329232 main.go:128] libmachine: About to run SSH command:
if ! grep -xq '.*\sauto-20210609012809-9941' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20210609012809-9941/g' /etc/hosts;
else
echo '127.0.1.1 auto-20210609012809-9941' | sudo tee -a /etc/hosts;
fi
fi
I0609 01:41:40.009081 329232 main.go:128] libmachine: SSH cmd err, output: <nil>:
I0609 01:41:40.009113 329232 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube}
I0609 01:41:40.009136 329232 ubuntu.go:177] setting up certificates
I0609 01:41:40.009147 329232 provision.go:83] configureAuth start
I0609 01:41:40.009201 329232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210609012809-9941
I0609 01:41:40.054568 329232 provision.go:137] copyHostCerts
I0609 01:41:40.054639 329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem, removing ...
I0609 01:41:40.054650 329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem
I0609 01:41:40.054702 329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem (1082 bytes)
I0609 01:41:40.054772 329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem, removing ...
I0609 01:41:40.054816 329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem
I0609 01:41:40.054836 329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem (1123 bytes)
I0609 01:41:40.054888 329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem, removing ...
I0609 01:41:40.054896 329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem
I0609 01:41:40.054916 329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem (1679 bytes)
I0609 01:41:40.054956 329232 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem org=jenkins.auto-20210609012809-9941 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20210609012809-9941]
I0609 01:41:40.199140 329232 provision.go:171] copyRemoteCerts
I0609 01:41:40.199207 329232 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0609 01:41:40.199267 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:40.240189 329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
I0609 01:41:40.339747 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0609 01:41:40.358551 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0609 01:41:40.377700 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
I0609 01:41:40.396157 329232 provision.go:86] duration metric: configureAuth took 386.999034ms
I0609 01:41:40.396180 329232 ubuntu.go:193] setting minikube options for container-runtime
I0609 01:41:40.396396 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:40.437678 329232 main.go:128] libmachine: Using SSH client type: native
I0609 01:41:40.437928 329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32990 <nil> <nil>}
I0609 01:41:40.437947 329232 main.go:128] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0609 01:41:40.565938 329232 main.go:128] libmachine: SSH cmd err, output: <nil>: overlay
I0609 01:41:40.565966 329232 ubuntu.go:71] root file system type: overlay
I0609 01:41:40.566224 329232 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
I0609 01:41:40.566318 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:40.609110 329232 main.go:128] libmachine: Using SSH client type: native
I0609 01:41:40.609254 329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32990 <nil> <nil>}
I0609 01:41:40.609318 329232 main.go:128] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0609 01:41:40.742784 329232 main.go:128] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0609 01:41:40.742865 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:40.799645 329232 main.go:128] libmachine: Using SSH client type: native
I0609 01:41:40.799898 329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32990 <nil> <nil>}
I0609 01:41:40.799934 329232 main.go:128] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0609 01:41:41.471089 329232 main.go:128] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2021-06-02 11:54:50.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2021-06-09 01:41:40.733754700 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
+BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
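[editorial sketch] The "diff -u ... || { mv ...; daemon-reload; enable; restart; }" command above makes the unit update idempotent: docker is only restarted when the rendered unit actually differs from what is on disk, which is why the diff output appears only on this first provisioning pass. The same idiom sketched in Go; the path and rendered content in main are placeholders:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

func updateUnit(path string, rendered []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: skip the daemon-reload and restart entirely
	}
	// Stage the new unit next to the old one, then swap it in atomically.
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := updateUnit("/lib/systemd/system/docker.service", []byte("[Unit]\n")); err != nil {
		log.Fatal(err)
	}
}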
I0609 01:41:41.471128 329232 machine.go:91] provisioned docker machine in 1.835332676s
I0609 01:41:41.471143 329232 client.go:171] LocalClient.Create took 7.112842351s
I0609 01:41:41.471164 329232 start.go:168] duration metric: libmachine.API.Create for "auto-20210609012809-9941" took 7.112906767s
I0609 01:41:41.471179 329232 start.go:267] post-start starting for "auto-20210609012809-9941" (driver="docker")
I0609 01:41:41.471186 329232 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0609 01:41:41.471252 329232 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0609 01:41:41.471302 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:41.519729 329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
I0609 01:41:41.609111 329232 ssh_runner.go:149] Run: cat /etc/os-release
I0609 01:41:41.611701 329232 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0609 01:41:41.611732 329232 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0609 01:41:41.611740 329232 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0609 01:41:41.611745 329232 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0609 01:41:41.611753 329232 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/addons for local assets ...
I0609 01:41:41.611793 329232 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/files for local assets ...
I0609 01:41:41.611879 329232 start.go:270] post-start completed in 140.693775ms
I0609 01:41:41.612136 329232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210609012809-9941
I0609 01:41:41.660654 329232 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/config.json ...
I0609 01:41:41.660931 329232 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0609 01:41:41.660996 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:41.708265 329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
I0609 01:41:41.793790 329232 start.go:129] duration metric: createHost completed in 7.437849081s
I0609 01:41:41.793878 329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
W0609 01:41:41.834734 329232 fix.go:134] unexpected machine state, will restart: <nil>
I0609 01:41:41.834764 329232 machine.go:88] provisioning docker machine ...
I0609 01:41:41.834786 329232 ubuntu.go:169] provisioning hostname "auto-20210609012809-9941"
I0609 01:41:41.834833 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:41.879476 329232 main.go:128] libmachine: Using SSH client type: native
I0609 01:41:41.879641 329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32990 <nil> <nil>}
I0609 01:41:41.879661 329232 main.go:128] libmachine: About to run SSH command:
sudo hostname auto-20210609012809-9941 && echo "auto-20210609012809-9941" | sudo tee /etc/hostname
I0609 01:41:42.011151 329232 main.go:128] libmachine: SSH cmd err, output: <nil>: auto-20210609012809-9941
I0609 01:41:42.011225 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:42.061407 329232 main.go:128] libmachine: Using SSH client type: native
I0609 01:41:42.061641 329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32990 <nil> <nil>}
I0609 01:41:42.061675 329232 main.go:128] libmachine: About to run SSH command:
if ! grep -xq '.*\sauto-20210609012809-9941' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20210609012809-9941/g' /etc/hosts;
else
echo '127.0.1.1 auto-20210609012809-9941' | sudo tee -a /etc/hosts;
fi
fi
I0609 01:41:42.184948 329232 main.go:128] libmachine: SSH cmd err, output: <nil>:
I0609 01:41:42.184977 329232 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube}
I0609 01:41:42.185001 329232 ubuntu.go:177] setting up certificates
I0609 01:41:42.185011 329232 provision.go:83] configureAuth start
I0609 01:41:42.185062 329232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210609012809-9941
I0609 01:41:42.223424 329232 provision.go:137] copyHostCerts
I0609 01:41:42.223473 329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem, removing ...
I0609 01:41:42.223480 329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem
I0609 01:41:42.223524 329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem (1082 bytes)
I0609 01:41:42.223592 329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem, removing ...
I0609 01:41:42.223605 329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem
I0609 01:41:42.223629 329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem (1123 bytes)
I0609 01:41:42.223679 329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem, removing ...
I0609 01:41:42.223689 329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem
I0609 01:41:42.223706 329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem (1679 bytes)
I0609 01:41:42.223802 329232 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem org=jenkins.auto-20210609012809-9941 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20210609012809-9941]
I0609 01:41:42.486214 329232 provision.go:171] copyRemoteCerts
I0609 01:41:42.486276 329232 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0609 01:41:42.486327 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:42.526157 329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
I0609 01:41:42.612850 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0609 01:41:42.630046 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0609 01:41:42.647341 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem --> /etc/docker/server.pem (1245 bytes)
I0609 01:41:42.663823 329232 provision.go:86] duration metric: configureAuth took 478.797993ms
I0609 01:41:42.663855 329232 ubuntu.go:193] setting minikube options for container-runtime
I0609 01:41:42.664049 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:42.708962 329232 main.go:128] libmachine: Using SSH client type: native
I0609 01:41:42.709147 329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32990 <nil> <nil>}
I0609 01:41:42.709164 329232 main.go:128] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0609 01:41:42.837104 329232 main.go:128] libmachine: SSH cmd err, output: <nil>: overlay
I0609 01:41:42.837131 329232 ubuntu.go:71] root file system type: overlay
I0609 01:41:42.837293 329232 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
I0609 01:41:42.837345 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:42.884564 329232 main.go:128] libmachine: Using SSH client type: native
I0609 01:41:42.884726 329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32990 <nil> <nil>}
I0609 01:41:42.884819 329232 main.go:128] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0609 01:41:43.017785 329232 main.go:128] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0609 01:41:43.017862 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:43.058769 329232 main.go:128] libmachine: Using SSH client type: native
I0609 01:41:43.058909 329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 32990 <nil> <nil>}
I0609 01:41:43.058927 329232 main.go:128] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0609 01:41:43.180717 329232 main.go:128] libmachine: SSH cmd err, output: <nil>:
I0609 01:41:43.180750 329232 machine.go:91] provisioned docker machine in 1.345979023s
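(The unit update above is a small compare-and-swap: the new unit is written to docker.service.new, diffed against the live unit, and only on a non-empty diff moved into place and followed by daemon-reload, enable, and restart. The same pattern as a standalone sketch with generic paths:)
  # Sketch: swap in a systemd unit only if its content actually changed.
  new=/lib/systemd/system/docker.service.new
  cur=/lib/systemd/system/docker.service
  if ! sudo diff -u "$cur" "$new"; then        # diff exits non-zero on difference
    sudo mv "$new" "$cur"
    sudo systemctl daemon-reload
    sudo systemctl enable docker
    sudo systemctl restart docker
  fi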
I0609 01:41:43.180763 329232 start.go:267] post-start starting for "auto-20210609012809-9941" (driver="docker")
I0609 01:41:43.180773 329232 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0609 01:41:43.180829 329232 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0609 01:41:43.180871 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:43.220933 329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
I0609 01:41:43.308831 329232 ssh_runner.go:149] Run: cat /etc/os-release
I0609 01:41:43.311629 329232 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0609 01:41:43.311653 329232 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0609 01:41:43.311664 329232 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0609 01:41:43.311671 329232 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0609 01:41:43.311681 329232 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/addons for local assets ...
I0609 01:41:43.311732 329232 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/files for local assets ...
I0609 01:41:43.311850 329232 start.go:270] post-start completed in 131.0789ms
I0609 01:41:43.311895 329232 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0609 01:41:43.311938 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:43.351864 329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
I0609 01:41:43.439589 329232 fix.go:57] fixHost completed within 3m18.46145985s
I0609 01:41:43.439614 329232 start.go:80] releasing machines lock for "auto-20210609012809-9941", held for 3m18.461506998s
I0609 01:41:43.439689 329232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210609012809-9941
I0609 01:41:43.480908 329232 ssh_runner.go:149] Run: sudo service containerd status
I0609 01:41:43.480953 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:43.480998 329232 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0609 01:41:43.481050 329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
I0609 01:41:43.523337 329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
I0609 01:41:43.523672 329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
I0609 01:41:43.625901 329232 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0609 01:41:43.634199 329232 cruntime.go:225] skipping containerd shutdown because we are bound to it
I0609 01:41:43.634259 329232 ssh_runner.go:149] Run: sudo service crio status
I0609 01:41:43.651967 329232 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0609 01:41:43.663538 329232 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0609 01:41:43.671774 329232 ssh_runner.go:149] Run: sudo service docker status
I0609 01:41:43.685805 329232 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0609 01:41:41.955318 344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:44.454390 344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:43.733795 329232 out.go:197] * Preparing Kubernetes v1.20.7 on Docker 20.10.7 ...
I0609 01:41:43.733887 329232 cli_runner.go:115] Run: docker network inspect auto-20210609012809-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0609 01:41:43.781233 329232 ssh_runner.go:149] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0609 01:41:43.784669 329232 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
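(The command above is an idempotent upsert of a hosts entry: any existing host.minikube.internal line is filtered out, the fresh mapping is appended, and the result is copied back with sudo. The pattern generalizes; a sketch with the IP and name as parameters, function name hypothetical:)
  # Sketch: upsert a name -> IP mapping in /etc/hosts without duplicating entries.
  update_hosts_entry() {
    local ip="$1" name="$2"
    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
    sudo cp "/tmp/hosts.$$" /etc/hosts
  }
  update_hosts_entry 192.168.76.1 host.minikube.internal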
I0609 01:41:43.794580 329232 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/client.crt
I0609 01:41:43.794703 329232 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/client.key
I0609 01:41:43.794837 329232 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
I0609 01:41:43.794899 329232 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0609 01:41:43.836439 329232 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.20.7
k8s.gcr.io/kube-controller-manager:v1.20.7
k8s.gcr.io/kube-apiserver:v1.20.7
k8s.gcr.io/kube-scheduler:v1.20.7
gcr.io/k8s-minikube/storage-provisioner:v5
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
-- /stdout --
I0609 01:41:43.836465 329232 docker.go:466] Images already preloaded, skipping extraction
I0609 01:41:43.836518 329232 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0609 01:41:43.874900 329232 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.20.7
k8s.gcr.io/kube-controller-manager:v1.20.7
k8s.gcr.io/kube-apiserver:v1.20.7
k8s.gcr.io/kube-scheduler:v1.20.7
gcr.io/k8s-minikube/storage-provisioner:v5
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
-- /stdout --
I0609 01:41:43.874929 329232 cache_images.go:74] Images are preloaded, skipping loading
I0609 01:41:43.874987 329232 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0609 01:41:43.959341 329232 cni.go:93] Creating CNI manager for ""
I0609 01:41:43.959363 329232 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0609 01:41:43.959373 329232 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0609 01:41:43.959385 329232 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.7 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-20210609012809-9941 NodeName:auto-20210609012809-9941 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0609 01:41:43.959528 329232 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "auto-20210609012809-9941"
kubeletExtraArgs:
node-ip: 192.168.76.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.7
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
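(The rendered config above — InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration — is staged as kubeadm.yaml.new and handed to kubeadm init further down. A file like this can be sanity-checked by hand without mutating the node; a sketch:)
  # Sketch: exercise the generated config without creating anything.
  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
  # The cgroupDriver above (cgroupfs) should agree with the container runtime's:
  docker info --format '{{.CgroupDriver}}'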
I0609 01:41:43.959623 329232 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.7/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=auto-20210609012809-9941 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.7 ClusterName:auto-20210609012809-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0609 01:41:43.959678 329232 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.7
I0609 01:41:43.966644 329232 binaries.go:44] Found k8s binaries, skipping transfer
I0609 01:41:43.966767 329232 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d
I0609 01:41:43.973306 329232 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (350 bytes)
I0609 01:41:43.985377 329232 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0609 01:41:43.996832 329232 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1883 bytes)
I0609 01:41:44.008194 329232 ssh_runner.go:316] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes)
I0609 01:41:44.019580 329232 ssh_runner.go:316] scp memory --> /etc/init.d/kubelet (839 bytes)
I0609 01:41:44.031187 329232 ssh_runner.go:149] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0609 01:41:44.033902 329232 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0609 01:41:44.042089 329232 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941 for IP: 192.168.76.2
I0609 01:41:44.042136 329232 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.key
I0609 01:41:44.042171 329232 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.key
I0609 01:41:44.042229 329232 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/client.key
I0609 01:41:44.042250 329232 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key.31bdca25
I0609 01:41:44.042257 329232 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0609 01:41:44.226573 329232 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt.31bdca25 ...
I0609 01:41:44.226606 329232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt.31bdca25: {Name:mk90ec242a66bfd79902e518464ceb62421bad6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:41:44.226771 329232 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key.31bdca25 ...
I0609 01:41:44.226783 329232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key.31bdca25: {Name:mkfae0a3bd896dd88f44a8261ced590d5cf2eaf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:41:44.226857 329232 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt
I0609 01:41:44.226912 329232 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key
I0609 01:41:44.226968 329232 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.key
I0609 01:41:44.226982 329232 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.crt with IP's: []
I0609 01:41:44.493832 329232 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.crt ...
I0609 01:41:44.493863 329232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.crt: {Name:mkb1a9418c2d79591044d594bd7bb611a67d607c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:41:44.494045 329232 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.key ...
I0609 01:41:44.494060 329232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.key: {Name:mkadb2ec9513a5b1c87d24f9a0d9353126c956ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0609 01:41:44.494231 329232 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941.pem (1338 bytes)
W0609 01:41:44.494272 329232 certs.go:365] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941_empty.pem, impossibly tiny 0 bytes
I0609 01:41:44.494299 329232 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem (1675 bytes)
I0609 01:41:44.494326 329232 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem (1082 bytes)
I0609 01:41:44.494386 329232 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem (1123 bytes)
I0609 01:41:44.494417 329232 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem (1679 bytes)
I0609 01:41:44.495301 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0609 01:41:44.513759 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0609 01:41:44.556375 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0609 01:41:44.574638 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0609 01:41:44.590891 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0609 01:41:44.607761 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0609 01:41:44.624984 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0609 01:41:44.641979 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0609 01:41:44.661420 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941.pem --> /usr/share/ca-certificates/9941.pem (1338 bytes)
I0609 01:41:44.679420 329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0609 01:41:44.697286 329232 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
I0609 01:41:44.709772 329232 ssh_runner.go:149] Run: openssl version
I0609 01:41:44.714441 329232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9941.pem && ln -fs /usr/share/ca-certificates/9941.pem /etc/ssl/certs/9941.pem"
I0609 01:41:44.721420 329232 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9941.pem
I0609 01:41:44.724999 329232 certs.go:410] hashing: -rw-r--r-- 1 root root 1338 Jun 9 01:04 /usr/share/ca-certificates/9941.pem
I0609 01:41:44.725051 329232 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9941.pem
I0609 01:41:44.730221 329232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9941.pem /etc/ssl/certs/51391683.0"
I0609 01:41:44.738018 329232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0609 01:41:44.744990 329232 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0609 01:41:44.747847 329232 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Jun 9 00:58 /usr/share/ca-certificates/minikubeCA.pem
I0609 01:41:44.747885 329232 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0609 01:41:44.752327 329232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
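(The openssl x509 -hash calls above compute the subject-hash file names — 51391683.0 and b5213941.0 — that OpenSSL's CApath lookup expects under /etc/ssl/certs; the symlinks then make each CA discoverable by hash. As a generic sketch:)
  # Sketch: install a CA under /etc/ssl/certs with its subject-hash symlink.
  cert=/usr/share/ca-certificates/minikubeCA.pem
  h=$(openssl x509 -hash -noout -in "$cert")
  sudo ln -fs "$cert" "/etc/ssl/certs/${h}.0"   # e.g. b5213941.0 for minikubeCA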
I0609 01:41:44.759007 329232 kubeadm.go:390] StartCluster: {Name:auto-20210609012809-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:auto-20210609012809-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0609 01:41:44.759106 329232 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0609 01:41:44.801843 329232 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0609 01:41:44.810329 329232 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0609 01:41:44.818129 329232 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0609 01:41:44.818183 329232 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0609 01:41:44.825259 329232 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0609 01:41:44.825307 329232 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
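(kubeadm init here skips a fixed list of preflight checks, which is expected under the docker driver: SystemVerification and the port/file checks don't hold inside a container, and the stale-config cleanup above already confirmed no prior kubeconfigs exist. To see what preflight alone would report for the same config, the phase can be run in isolation; a sketch:)
  # Sketch: run only kubeadm's preflight phase against the generated config.
  sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml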
I0609 01:41:43.875536 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:46.376745 352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:45.588110 329232 out.go:197] - Generating certificates and keys ...
I0609 01:41:48.300953 300573 system_pods.go:86] 8 kube-system pods found
I0609 01:41:48.300985 300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:48.300993 300573 system_pods.go:89] "etcd-old-k8s-version-20210609012901-9941" [d1de6264-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:48.301000 300573 system_pods.go:89] "kube-apiserver-old-k8s-version-20210609012901-9941" [c8550af5-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:48.301006 300573 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210609012901-9941" [c8ed9fe0-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:48.301013 300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:48.301020 300573 system_pods.go:89] "kube-scheduler-old-k8s-version-20210609012901-9941" [c768fa6f-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:48.301031 300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0609 01:41:48.301043 300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
I0609 01:41:48.301053 300573 system_pods.go:126] duration metric: took 56.76990207s to wait for k8s-apps to be running ...
I0609 01:41:48.301068 300573 system_svc.go:44] waiting for kubelet service to be running ....
I0609 01:41:48.301114 300573 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0609 01:41:48.310381 300573 system_svc.go:56] duration metric: took 9.307261ms WaitForService to wait for kubelet.
I0609 01:41:48.310405 300573 kubeadm.go:547] duration metric: took 1m14.727322076s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0609 01:41:48.310424 300573 node_conditions.go:102] verifying NodePressure condition ...
I0609 01:41:48.312372 300573 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
I0609 01:41:48.312391 300573 node_conditions.go:123] node cpu capacity is 8
I0609 01:41:48.312404 300573 node_conditions.go:105] duration metric: took 1.974952ms to run NodePressure ...
I0609 01:41:48.312415 300573 start.go:219] waiting for startup goroutines ...
I0609 01:41:48.356569 300573 start.go:463] kubectl: 1.20.5, cluster: 1.14.0 (minor skew: 6)
I0609 01:41:48.358565 300573 out.go:170]
W0609 01:41:48.358730 300573 out.go:235] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.14.0.
I0609 01:41:48.360236 300573 out.go:170] - Want kubectl v1.14.0? Try 'minikube kubectl -- get pods -A'
I0609 01:41:48.361792 300573 out.go:170] * Done! kubectl is now configured to use "old-k8s-version-20210609012901-9941" cluster and "default" namespace by default
I0609 01:41:46.954352 344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:48.955130 344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:47.875252 352096 pod_ready.go:92] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"True"
I0609 01:41:47.875281 352096 pod_ready.go:81] duration metric: took 28.515609073s waiting for pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace to be "Ready" ...
I0609 01:41:47.875297 352096 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-8bhjk" in "kube-system" namespace to be "Ready" ...
I0609 01:41:49.886712 352096 pod_ready.go:92] pod "calico-node-8bhjk" in "kube-system" namespace has status "Ready":"True"
I0609 01:41:49.886740 352096 pod_ready.go:81] duration metric: took 2.011435025s waiting for pod "calico-node-8bhjk" in "kube-system" namespace to be "Ready" ...
I0609 01:41:49.886752 352096 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-kngs5" in "kube-system" namespace to be "Ready" ...
I0609 01:41:47.864552 329232 out.go:197] - Booting up control plane ...
I0609 01:41:50.955197 344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:53.456163 344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:51.896789 352096 pod_ready.go:92] pod "coredns-74ff55c5b-kngs5" in "kube-system" namespace has status "Ready":"True"
I0609 01:41:51.896811 352096 pod_ready.go:81] duration metric: took 2.010052283s waiting for pod "coredns-74ff55c5b-kngs5" in "kube-system" namespace to be "Ready" ...
I0609 01:41:51.896821 352096 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-qc224" in "kube-system" namespace to be "Ready" ...
I0609 01:41:51.898882 352096 pod_ready.go:97] error getting pod "coredns-74ff55c5b-qc224" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-qc224" not found
I0609 01:41:51.898909 352096 pod_ready.go:81] duration metric: took 2.080404ms waiting for pod "coredns-74ff55c5b-qc224" in "kube-system" namespace to be "Ready" ...
E0609 01:41:51.898919 352096 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-74ff55c5b-qc224" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-qc224" not found
I0609 01:41:51.898928 352096 pod_ready.go:78] waiting up to 5m0s for pod "etcd-calico-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
I0609 01:41:53.907845 352096 pod_ready.go:102] pod "etcd-calico-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:55.911876 352096 pod_ready.go:102] pod "etcd-calico-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:55.954929 344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:57.955126 344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:59.956675 344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:58.408965 352096 pod_ready.go:102] pod "etcd-calico-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:42:00.909845 352096 pod_ready.go:102] pod "etcd-calico-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
I0609 01:41:57.536931 329232 out.go:197] - Configuring RBAC rules ...
I0609 01:41:57.950447 329232 cni.go:93] Creating CNI manager for ""
I0609 01:41:57.950472 329232 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0609 01:41:57.950504 329232 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0609 01:41:57.950565 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:57.950588 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl label nodes minikube.k8s.io/version=v1.21.0-beta.0 minikube.k8s.io/commit=d4601e4184ba947b7d077d7d426c2bca79bbf9fc minikube.k8s.io/name=auto-20210609012809-9941 minikube.k8s.io/updated_at=2021_06_09T01_41_57_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
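(The label call above stamps the node with minikube's version, commit, profile name, and creation timestamp. Those labels can be read back per node, or used as selectors; a sketch:)
  # Sketch: surface the minikube metadata labels on the nodes.
  kubectl get nodes -L minikube.k8s.io/version -L minikube.k8s.io/commit
  # Or select nodes belonging to one profile:
  kubectl get nodes -l minikube.k8s.io/name=auto-20210609012809-9941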
I0609 01:41:58.270674 329232 ops.go:34] apiserver oom_adj: -16
I0609 01:41:58.270873 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:58.834789 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:59.334848 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:41:59.834836 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:42:00.334592 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:42:00.835312 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:42:01.335240 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:42:01.834799 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0609 01:42:02.334849 329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
*
* ==> Docker <==
* -- Logs begin at Wed 2021-06-09 01:34:39 UTC, end at Wed 2021-06-09 01:42:05 UTC. --
Jun 09 01:40:02 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:02.779605017Z" level=info msg="ignoring event" container=cc0aca83efeca0d2b5a6380f0035838137a5ddede617bb12397795175054b95c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.115851734Z" level=info msg="ignoring event" container=5e67ef29fd782e6882093cefc8d1b2e4e6502289a8aab7eb602baa78ff03d4df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.244359054Z" level=info msg="ignoring event" container=647284240c9b3ff26c1e5d787021349e374f04b87d9f0c78f0972878ca393ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.376184625Z" level=info msg="ignoring event" container=8a1abb294bc93b7aeb07164f4e6a549e477648e117418f2e94e2b62b742a603f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.503253921Z" level=info msg="ignoring event" container=a8f1d2a6258c19eb81fe707363ba95a59689f2623e07e372b5f44056f81b71b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.655460364Z" level=info msg="ignoring event" container=0a42e38b95e96fac8c84fbd6415b07279c3f7b4dc175292ee03bf72f93504bff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.868060101Z" level=info msg="ignoring event" container=8f37f3879958d7bcfb1fb37da48178584862829d0f9ab46e57d49320f37fc3f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:04 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:04.043079624Z" level=info msg="ignoring event" container=83d747333959a40a15d16276795b19088263280ab507d0e39ebf3009f9cd7290 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:04 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:04.194657529Z" level=info msg="ignoring event" container=76c2df28bafa15f4875a399fd3f8bde03a6e76c0e021ffe56eb96ee35045923f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:36 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:36.611806519Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Jun 09 01:40:37 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:37.093237111Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
Jun 09 01:40:37 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:37.256429752Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
Jun 09 01:40:41 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:41.432301024Z" level=warning msg="Error getting v2 registry: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:40:41 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:41.432343163Z" level=info msg="Attempting next endpoint for pull after error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:40:41 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:41.433989922Z" level=error msg="Handler for POST /v1.38/images/create returned error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:40:41 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:41.749379613Z" level=info msg="ignoring event" container=209b2f1f12c840e229b4ae712cd7def2451c3e705cd6cf899ed05d4cae0c0929 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:43 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:43.034860759Z" level=info msg="ignoring event" container=e15298565a01a44ba2e81fbb337da50279e879415a5091222be3a5e36aee08d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:40:57 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:57.032186534Z" level=warning msg="Error getting v2 registry: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:40:57 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:57.032222718Z" level=info msg="Attempting next endpoint for pull after error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:40:57 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:57.041807409Z" level=error msg="Handler for POST /v1.38/images/create returned error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:41:01 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:41:01.346826619Z" level=info msg="ignoring event" container=417a2459ca5d2c0a4e1befd352a48e44dc91fb4015fe574d929d8c1097ce09cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 09 01:41:27 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:41:27.038495294Z" level=warning msg="Error getting v2 registry: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:41:27 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:41:27.038537670Z" level=info msg="Attempting next endpoint for pull after error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:41:27 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:41:27.040714461Z" level=error msg="Handler for POST /v1.38/images/create returned error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:41:34 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:41:34.345802355Z" level=info msg="ignoring event" container=0a878f155b99161e7c0c238df1d2ea55fb150f549896a43282d60c2825d2e0ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
0a878f155b991 a90209bb39e3d 31 seconds ago Exited dashboard-metrics-scraper 3 7b28bd8313edd
9230420d066a0 9a07b5b4bfac0 About a minute ago Running kubernetes-dashboard 0 52cb0877bbe76
80656451acc2e eb516548c180f About a minute ago Running coredns 0 b82c08bb91986
d27ec4783cae5 6e38f40d628db About a minute ago Running storage-provisioner 0 3c840dfa16845
ef3565ebed501 5cd54e388abaf About a minute ago Running kube-proxy 0 facebb8dc382e
15294a1b99e50 00638a24688b0 About a minute ago Running kube-scheduler 0 9113a9c371341
76559266dc96c b95b1efa0436b About a minute ago Running kube-controller-manager 0 5c8b321c5839a
557ff658123d4 2c4adeb21b4ff About a minute ago Running etcd 0 4d98c28eb4819
7435c96f89723 ecf910f40d6e0 About a minute ago Running kube-apiserver 0 553d498b0da82
*
* ==> coredns [80656451acc2] <==
* .:53
2021-06-09T01:40:37.071Z [INFO] CoreDNS-1.3.1
2021-06-09T01:40:37.071Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2021-06-09T01:40:37.071Z [INFO] plugin/reload: Running configuration MD5 = d7336ec3b7f1205cfa0fef85b62c291b
*
* ==> describe nodes <==
* Name: old-k8s-version-20210609012901-9941
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=old-k8s-version-20210609012901-9941
kubernetes.io/os=linux
minikube.k8s.io/commit=d4601e4184ba947b7d077d7d426c2bca79bbf9fc
minikube.k8s.io/name=old-k8s-version-20210609012901-9941
minikube.k8s.io/updated_at=2021_06_09T01_40_17_0700
minikube.k8s.io/version=v1.21.0-beta.0
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 09 Jun 2021 01:40:13 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 09 Jun 2021 01:41:13 +0000 Wed, 09 Jun 2021 01:40:08 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 09 Jun 2021 01:41:13 +0000 Wed, 09 Jun 2021 01:40:08 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 09 Jun 2021 01:41:13 +0000 Wed, 09 Jun 2021 01:40:08 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 09 Jun 2021 01:41:13 +0000 Wed, 09 Jun 2021 01:40:08 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.67.2
Hostname: old-k8s-version-20210609012901-9941
Capacity:
cpu: 8
ephemeral-storage: 309568300Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32951376Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 309568300Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32951376Ki
pods: 110
System Info:
Machine ID: b77ec962e3734760b1e756ffc5e83152
System UUID: fcb82c90-e30d-41cf-83d7-0b244092491c
Boot ID: e08f76ce-1642-432a-8e61-95aaa19183a7
Kernel Version: 4.9.0-15-amd64
OS Image: Ubuntu 20.04.2 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.7
Kubelet Version: v1.14.0
Kube-Proxy Version: v1.14.0
PodCIDR: 10.244.0.0/24
Non-terminated Pods: (10 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system                 coredns-fb8b8dccf-ctgrx                                         100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     93s
kube-system                 etcd-old-k8s-version-20210609012901-9941                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
kube-system                 kube-apiserver-old-k8s-version-20210609012901-9941              250m (3%)     0 (0%)      0 (0%)           0 (0%)         48s
kube-system                 kube-controller-manager-old-k8s-version-20210609012901-9941    200m (2%)     0 (0%)      0 (0%)           0 (0%)         47s
kube-system                 kube-proxy-97rr9                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
kube-system                 kube-scheduler-old-k8s-version-20210609012901-9941              100m (1%)     0 (0%)      0 (0%)           0 (0%)         50s
kube-system                 metrics-server-8546d8b77b-lqx7b                                 100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         89s
kube-system                 storage-provisioner                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
kubernetes-dashboard        dashboard-metrics-scraper-5b494cc544-529qb                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
kubernetes-dashboard        kubernetes-dashboard-5d8978d65d-5c7t7                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu                750m (9%)   0 (0%)
memory             370Mi (1%)  170Mi (0%)
ephemeral-storage  0 (0%)      0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 118s kubelet, old-k8s-version-20210609012901-9941 Starting kubelet.
Normal NodeHasSufficientMemory 118s (x8 over 118s) kubelet, old-k8s-version-20210609012901-9941 Node old-k8s-version-20210609012901-9941 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 118s (x8 over 118s) kubelet, old-k8s-version-20210609012901-9941 Node old-k8s-version-20210609012901-9941 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 118s (x7 over 118s) kubelet, old-k8s-version-20210609012901-9941 Node old-k8s-version-20210609012901-9941 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 118s kubelet, old-k8s-version-20210609012901-9941 Updated Node Allocatable limit across pods
Normal Starting 90s kube-proxy, old-k8s-version-20210609012901-9941 Starting kube-proxy.
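Note on the node snapshot above: the node is Ready and Allocatable equals Capacity, which is expected when kubelet runs without reserved-resource flags, as in this KIC container. For reference, a minimal client-go sketch that reads the same Conditions table programmatically; the kubeconfig path and a recent client-go API are assumptions, not part of the test harness:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; adjust for the profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(),
		"old-k8s-version-20210609012901-9941", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Print each condition, mirroring the Conditions table above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason)
	}
}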
*
* ==> dmesg <==
* [ +1.658653] IPv4: martian source 10.85.0.4 from 10.85.0.4, on dev eth0
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff ce 5c c6 1f 63 8a 08 06 .......\..c...
[ +0.004022] IPv4: martian source 10.85.0.5 from 10.85.0.5, on dev eth0
[ +0.000001] ll header: 00000000: ff ff ff ff ff ff 0e 5d 4b c1 e0 ed 08 06 .......]K.....
[ +2.140856] IPv4: martian source 10.85.0.6 from 10.85.0.6, on dev eth0
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff 3e a3 2b db cb b6 08 06 ......>.+.....
[ +0.147751] IPv4: martian source 10.85.0.7 from 10.85.0.7, on dev eth0
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff 9a f2 40 59 da 87 08 06 ........@Y....
[ +2.083360] IPv4: martian source 10.85.0.8 from 10.85.0.8, on dev eth0
[ +0.000001] ll header: 00000000: ff ff ff ff ff ff 56 9d 71 18 33 dd 08 06 ......V.q.3...
[ +0.000616] IPv4: martian source 10.85.0.9 from 10.85.0.9, on dev eth0
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff 8e 8d b3 62 b0 07 08 06 .........b....
[ +1.714381] IPv4: martian source 10.85.0.10 from 10.85.0.10, on dev eth0
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff 9e d1 b5 da bf 05 08 06 ..............
[ +0.003822] IPv4: martian source 10.85.0.11 from 10.85.0.11, on dev eth0
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff 92 3a 5c 13 9f 7c 08 06 .......:\..|..
[ +0.920701] IPv4: martian source 10.85.0.12 from 10.85.0.12, on dev eth0
[ +0.000003] ll header: 00000000: ff ff ff ff ff ff d2 50 1c d3 1f 17 08 06 .......P......
[ +0.002962] IPv4: martian source 10.85.0.13 from 10.85.0.13, on dev eth0
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff 86 09 69 5a 94 d2 08 06 ........iZ....
[ +0.999987] IPv4: martian source 10.85.0.14 from 10.85.0.14, on dev eth0
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff fa 88 03 51 34 f3 08 06 .........Q4...
[ +0.004235] IPv4: martian source 10.85.0.15 from 10.85.0.15, on dev eth0
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff f6 25 39 34 91 f2 08 06 .......%94....
[ +6.380947] cgroup: cgroup2: unknown option "nsdelegate"
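The repeated "martian source" messages mean the kernel received packets on eth0 with 10.85.0.x sources, addresses from a default CNI bridge range that are not routable on that interface; inside the KIC container this is noise rather than a failure. A minimal sketch that tallies such messages per source, assuming dmesg is readable on the host:

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"regexp"
)

func main() {
	// Tally "martian source" kernel messages per offending source IP.
	out, err := exec.Command("dmesg").Output()
	if err != nil {
		panic(err) // dmesg may require elevated privileges on some hosts
	}
	re := regexp.MustCompile(`martian source (\S+) from`)
	counts := map[string]int{}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for ip, n := range counts {
		fmt.Printf("%s: %d martian packets\n", ip, n)
	}
}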
*
* ==> etcd [557ff658123d] <==
* 2021-06-09 01:40:48.647414 W | wal: sync duration of 1.103904697s, expected less than 1s
2021-06-09 01:40:48.753091 W | etcdserver: request "header:<ID:2289933000483394557 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-fb8b8dccf\" mod_revision:364 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-fb8b8dccf\" value_size:1214 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-fb8b8dccf\" > >>" with result "size:16" took too long (105.414042ms) to execute
2021-06-09 01:40:48.753496 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (250.229741ms) to execute
2021-06-09 01:40:48.753722 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-fb8b8dccf-ctgrx\" " with result "range_response_count:1 size:1770" took too long (891.632545ms) to execute
2021-06-09 01:40:50.467937 W | etcdserver: request "header:<ID:2289933000483394562 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:537 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:677 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:16" took too long (1.08693209s) to execute
2021-06-09 01:40:50.468037 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.566131533s) to execute
2021-06-09 01:40:50.468071 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20210609012901-9941\" " with result "range_response_count:1 size:3347" took too long (1.710868913s) to execute
2021-06-09 01:40:50.468206 W | etcdserver: read-only range request "key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544-529qb.1686c662e29f9611\" " with result "range_response_count:1 size:597" took too long (928.182072ms) to execute
2021-06-09 01:40:51.483862 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-proxy-97rr9\" " with result "range_response_count:1 size:2147" took too long (1.013095215s) to execute
2021-06-09 01:41:12.976673 W | wal: sync duration of 1.117225227s, expected less than 1s
2021-06-09 01:41:13.114230 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:3347" took too long (314.968585ms) to execute
2021-06-09 01:41:13.114284 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-8546d8b77b-lqx7b.1686c6626a0d515c\" " with result "range_response_count:1 size:550" took too long (1.100437486s) to execute
2021-06-09 01:41:13.114371 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:4 size:7785" took too long (687.507808ms) to execute
2021-06-09 01:41:13.114518 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-8546d8b77b-lqx7b\" " with result "range_response_count:1 size:1851" took too long (1.101558003s) to execute
2021-06-09 01:41:13.114553 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (579.387664ms) to execute
2021-06-09 01:41:13.722674 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-8546d8b77b-lqx7b.1686c6626a0d9249\" " with result "range_response_count:1 size:511" took too long (603.050028ms) to execute
2021-06-09 01:41:13.722784 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/old-k8s-version-20210609012901-9941\" " with result "range_response_count:1 size:395" took too long (601.855298ms) to execute
2021-06-09 01:41:13.723059 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-node-lease\" " with result "range_response_count:1 size:187" took too long (573.108462ms) to execute
2021-06-09 01:41:15.464247 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (1.450534843s) to execute
2021-06-09 01:41:15.464304 W | etcdserver: read-only range request "key:\"/registry/rolebindings\" range_end:\"/registry/rolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (166.55648ms) to execute
2021-06-09 01:41:15.464595 W | etcdserver: read-only range request "key:\"/registry/ingress\" range_end:\"/registry/ingrest\" count_only:true " with result "range_response_count:0 size:5" took too long (144.856126ms) to execute
2021-06-09 01:41:15.465036 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings\" range_end:\"/registry/clusterrolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (1.527858302s) to execute
2021-06-09 01:41:15.465734 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (313.803884ms) to execute
2021-06-09 01:41:37.088502 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (579.483729ms) to execute
2021-06-09 01:41:57.525183 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (146.394885ms) to execute
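etcd prints the "sync duration ... expected less than 1s" warning when a WAL fsync is slow, and the multi-second read-only range requests above point the same way: a heavily loaded CI disk, not a data problem. A minimal sketch of the measurement etcd's warning is based on, timing a single write plus fsync to a temp file:

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	// Time one write+fsync, the operation etcd's WAL warning is about.
	f, err := os.CreateTemp("", "fsync-probe")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	defer f.Close()

	buf := make([]byte, 8*1024) // roughly one WAL page
	start := time.Now()
	if _, err := f.Write(buf); err != nil {
		panic(err)
	}
	if err := f.Sync(); err != nil {
		panic(err)
	}
	d := time.Since(start)
	fmt.Printf("write+fsync took %v\n", d)
	if d > time.Second {
		fmt.Println("slower than etcd's 1s WAL sync expectation")
	}
}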
*
* ==> kernel <==
* 01:42:05 up 1:24, 0 users, load average: 4.91, 3.39, 2.63
Linux old-k8s-version-20210609012901-9941 4.9.0-15-amd64 #1 SMP Debian 4.9.258-1 (2021-03-08) x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
*
* ==> kube-apiserver [7435c96f8972] <==
* I0609 01:41:53.476131 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:41:54.476295 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:41:54.476431 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:41:55.476606 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:41:55.476735 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:41:56.476937 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:41:56.477102 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:41:57.477291 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:41:57.477429 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:41:58.477563 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:41:58.477715 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:41:59.477874 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:41:59.478011 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:42:00.478169 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:42:00.478301 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:42:01.478453 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:42:01.478583 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:42:02.478748 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:42:02.478888 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:42:03.479048 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:42:03.479199 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:42:04.479372 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:42:04.479523 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0609 01:42:05.479686 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0609 01:42:05.479844 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
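The once-per-second "OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_..." pairs are the controller's routine polling of the local delegation chain, not errors; the genuinely unhealthy aggregated API in this run is metrics.k8s.io (see the controller-manager log below). A hedged client-go sketch for listing which API groups discovery currently serves; the kubeconfig path is an assumption:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// List every API group discovery succeeds for; a missing or failing
	// metrics.k8s.io entry here matches the controller-manager errors below.
	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		log.Fatal(err)
	}
	for _, g := range groups.Groups {
		fmt.Println(g.Name)
	}
}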
*
* ==> kube-controller-manager [76559266dc96] <==
* I0609 01:40:35.350957 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"af75069a-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"427", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0609 01:40:35.355715 1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0609 01:40:35.359115 1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"af7ffe92-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-5d8978d65d to 1
E0609 01:40:35.361941 1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0609 01:40:35.362185 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"af75069a-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0609 01:40:35.363976 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"af80a0bd-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"435", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0609 01:40:35.365457 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"af75069a-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0609 01:40:35.365465 1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0609 01:40:35.367928 1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0609 01:40:35.372059 1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0609 01:40:35.372481 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"af80a0bd-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"440", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0609 01:40:35.441817 1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0609 01:40:35.441964 1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0609 01:40:35.442412 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"af80a0bd-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"440", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0609 01:40:35.442440 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"af75069a-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0609 01:40:35.464444 1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0609 01:40:35.464486 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"af80a0bd-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"440", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0609 01:40:35.546527 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"af75069a-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-5b494cc544-529qb
I0609 01:40:35.546799 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"af80a0bd-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5d8978d65d-5c7t7
I0609 01:40:36.049812 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"af420efe-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"402", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-8546d8b77b-lqx7b
E0609 01:41:02.997582 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0609 01:41:05.550860 1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0609 01:41:33.249304 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0609 01:41:37.552663 1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0609 01:42:03.500854 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
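Both the resource quota controller and the garbage collector walk API discovery and fail only on metrics.k8s.io/v1beta1, because the metrics-server pod never starts (its image pull is deliberately broken in this test; see the kubelet log below). client-go surfaces this as a partial discovery failure that callers may tolerate; a sketch, assuming a recent client-go and the same kubeconfig path as above:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// ServerPreferredResources can return results together with an aggregate
	// error naming the groups that failed, which is exactly the map printed
	// by the garbage collector warning above.
	resources, err := cs.Discovery().ServerPreferredResources()
	if err != nil {
		if discovery.IsGroupDiscoveryFailedError(err) {
			fmt.Println("partial discovery failure (e.g. metrics.k8s.io):", err)
		} else {
			log.Fatal(err)
		}
	}
	fmt.Printf("discovered %d resource lists\n", len(resources))
}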
*
* ==> kube-proxy [ef3565ebed50] <==
* W0609 01:40:33.954499 1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
I0609 01:40:33.964131 1 server_others.go:148] Using iptables Proxier.
I0609 01:40:33.964802 1 server_others.go:178] Tearing down inactive rules.
E0609 01:40:34.154995 1 proxier.go:583] Error removing iptables rules in ipvs proxier: error deleting chain "KUBE-MARK-MASQ": exit status 1: iptables: Too many links.
I0609 01:40:35.290112 1 server.go:555] Version: v1.14.0
I0609 01:40:35.341044 1 config.go:202] Starting service config controller
I0609 01:40:35.341164 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0609 01:40:35.341748 1 config.go:102] Starting endpoints config controller
I0609 01:40:35.343249 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0609 01:40:35.441725 1 controller_utils.go:1034] Caches are synced for service config controller
I0609 01:40:35.443748 1 controller_utils.go:1034] Caches are synced for endpoints config controller
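With proxy-mode unset, kube-proxy assumes the iptables proxier and first tears down leftover ipvs rules; the single "Too many links" error during that cleanup indicates a chain that was still referenced and is harmless on this path. A rough sketch for inspecting the KUBE-* chains kube-proxy then programs; it assumes iptables is installed and typically needs root:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Dump the nat table and count the KUBE-* chains kube-proxy created.
	out, err := exec.Command("iptables", "-t", "nat", "-S").Output()
	if err != nil {
		panic(err) // typically requires root
	}
	n := 0
	for _, line := range strings.Split(string(out), "\n") {
		if strings.HasPrefix(line, "-N KUBE-") {
			n++
			fmt.Println(line)
		}
	}
	fmt.Printf("%d KUBE-* chains in nat table\n", n)
}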
*
* ==> kube-scheduler [15294a1b99e5] <==
* W0609 01:40:10.688361 1 authentication.go:55] Authentication is disabled
I0609 01:40:10.688374 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
I0609 01:40:10.688743 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0609 01:40:12.981814 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0609 01:40:12.981916 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0609 01:40:12.982827 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0609 01:40:13.050964 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0609 01:40:13.062003 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0609 01:40:13.062138 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0609 01:40:13.062510 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0609 01:40:13.062930 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0609 01:40:13.064487 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0609 01:40:13.065331 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0609 01:40:13.982943 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0609 01:40:13.984017 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0609 01:40:13.985045 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0609 01:40:14.052710 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0609 01:40:14.063171 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0609 01:40:14.063859 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0609 01:40:14.065063 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0609 01:40:14.066262 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0609 01:40:14.067278 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0609 01:40:14.068396 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
I0609 01:40:15.890053 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0609 01:40:15.990228 1 controller_utils.go:1034] Caches are synced for scheduler controller
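The burst of "forbidden" list errors is a startup race: the scheduler starts before the default RBAC bindings are reconciled, and the errors stop once they are (caches sync at 01:40:15). A sketch of probing one such permission with a SelfSubjectAccessReview; the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"
	"log"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Ask the apiserver whether the current identity may list pods
	// cluster-wide, the same kind of check the scheduler was failing.
	sar := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb: "list", Resource: "pods",
			},
		},
	}
	res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(
		context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("allowed:", res.Status.Allowed)
}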
*
* ==> kubelet <==
* -- Logs begin at Wed 2021-06-09 01:34:39 UTC, end at Wed 2021-06-09 01:42:06 UTC. --
Jun 09 01:40:41 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:41.434450 6233 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
Jun 09 01:40:41 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:41.434528 6233 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
Jun 09 01:40:41 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:41.434593 6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:40:41 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:41.702071 6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
Jun 09 01:40:43 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:43.724887 6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
Jun 09 01:40:44 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:44.734847 6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
Jun 09 01:40:49 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:49.538510 6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
Jun 09 01:40:57 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:57.042394 6233 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
Jun 09 01:40:57 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:57.042449 6233 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
Jun 09 01:40:57 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:57.042530 6233 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
Jun 09 01:40:57 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:57.042566 6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:41:01 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:01.836699 6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
Jun 09 01:41:09 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:09.538606 6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
Jun 09 01:41:12 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:12.012609 6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
Jun 09 01:41:21 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:21.011631 6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
Jun 09 01:41:27 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:27.040969 6233 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
Jun 09 01:41:27 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:27.041003 6233 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
Jun 09 01:41:27 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:27.041051 6233 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
Jun 09 01:41:27 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:27.041074 6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
Jun 09 01:41:35 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:35.034469 6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
Jun 09 01:41:39 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:39.538621 6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
Jun 09 01:41:40 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:40.012660 6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
Jun 09 01:41:52 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:52.011734 6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
Jun 09 01:41:53 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:53.012733 6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
Jun 09 01:42:05 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:42:05.011713 6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
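Every metrics-server failure above reduces to one DNS lookup: the test deliberately rewrites the image to the non-existent registry fake.domain, so ErrImagePull and the lengthening ImagePullBackOff are the intended behavior, not a regression. The failing step in isolation, as a sketch:

package main

import (
	"errors"
	"fmt"
	"net"
)

func main() {
	// The docker daemon's pull fails at this exact step: resolving the
	// registry host "fake.domain". On a sane resolver this returns NXDOMAIN.
	_, err := net.LookupHost("fake.domain")
	var dnsErr *net.DNSError
	if errors.As(err, &dnsErr) {
		fmt.Printf("lookup failed as expected: %v (NotFound=%v)\n", dnsErr, dnsErr.IsNotFound)
	}
}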
*
* ==> kubernetes-dashboard [9230420d066a] <==
* 2021/06/09 01:40:37 Using namespace: kubernetes-dashboard
2021/06/09 01:40:37 Using in-cluster config to connect to apiserver
2021/06/09 01:40:37 Using secret token for csrf signing
2021/06/09 01:40:37 Initializing csrf token from kubernetes-dashboard-csrf secret
2021/06/09 01:40:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2021/06/09 01:40:37 Successful initial request to the apiserver, version: v1.14.0
2021/06/09 01:40:37 Generating JWE encryption key
2021/06/09 01:40:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2021/06/09 01:40:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2021/06/09 01:40:37 Initializing JWE encryption key from synchronized object
2021/06/09 01:40:37 Creating in-cluster Sidecar client
2021/06/09 01:40:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2021/06/09 01:40:37 Serving insecurely on HTTP port: 9090
2021/06/09 01:41:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2021/06/09 01:41:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2021/06/09 01:40:37 Starting overwatch
*
* ==> storage-provisioner [d27ec4783cae] <==
* I0609 01:40:36.443365 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0609 01:40:36.452888 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0609 01:40:36.452950 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0609 01:40:36.459951 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0609 01:40:36.460148 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210609012901-9941_fc86cf60-f6d1-4320-9b0a-458b591f1e4d!
I0609 01:40:36.461060 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"af273732-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20210609012901-9941_fc86cf60-f6d1-4320-9b0a-458b591f1e4d became leader
I0609 01:40:36.560264 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210609012901-9941_fc86cf60-f6d1-4320-9b0a-458b591f1e4d!
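The provisioner only starts its controller after winning a leader lease on kube-system/k8s.io-minikube-hostpath, which is also why that Endpoints key appears so often in the etcd slow-request log above. A compressed sketch of the same pattern with client-go's leaderelection package; it uses the newer Leases lock, and the lock name, identity, and kubeconfig path are assumptions:

package main

import (
	"context"
	"log"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Hypothetical lock name and holder identity, for illustration only.
	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"kube-system", "demo-hostpath-lock",
		cs.CoreV1(), cs.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: "demo-holder"})
	if err != nil {
		log.Fatal(err)
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader; a provisioner would start its controller here")
			},
			OnStoppedLeading: func() { log.Println("lost lease") },
		},
	})
}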
-- /stdout --
helpers_test.go:250: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210609012901-9941 -n old-k8s-version-20210609012901-9941
helpers_test.go:257: (dbg) Run: kubectl --context old-k8s-version-20210609012901-9941 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:263: non-running pods: metrics-server-8546d8b77b-lqx7b
helpers_test.go:265: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:268: (dbg) Run: kubectl --context old-k8s-version-20210609012901-9941 describe pod metrics-server-8546d8b77b-lqx7b
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context old-k8s-version-20210609012901-9941 describe pod metrics-server-8546d8b77b-lqx7b: exit status 1 (63.082266ms)
** stderr **
Error from server (NotFound): pods "metrics-server-8546d8b77b-lqx7b" not found
** /stderr **
helpers_test.go:270: kubectl --context old-k8s-version-20210609012901-9941 describe pod metrics-server-8546d8b77b-lqx7b: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (7.55s)