=== RUN TestPause/serial/Pause
pause_test.go:108: (dbg) Run: out/minikube-linux-amd64 pause -p pause-20211203024124-532170 --alsologtostderr -v=5
pause_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-20211203024124-532170 --alsologtostderr -v=5: exit status 80 (4.683294064s)
-- stdout --
* Pausing node pause-20211203024124-532170 ...
-- /stdout --
** stderr **
I1203 02:42:52.260401 668281 out.go:297] Setting OutFile to fd 1 ...
I1203 02:42:52.260558 668281 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1203 02:42:52.260567 668281 out.go:310] Setting ErrFile to fd 2...
I1203 02:42:52.260573 668281 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1203 02:42:52.260739 668281 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/bin
I1203 02:42:52.260965 668281 out.go:304] Setting JSON to false
I1203 02:42:52.260997 668281 mustload.go:65] Loading cluster: pause-20211203024124-532170
I1203 02:42:52.261445 668281 config.go:176] Loaded profile config "pause-20211203024124-532170": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.4
I1203 02:42:52.261994 668281 cli_runner.go:115] Run: docker container inspect pause-20211203024124-532170 --format={{.State.Status}}
I1203 02:42:52.350978 668281 host.go:66] Checking if "pause-20211203024124-532170" exists ...
I1203 02:42:52.351362 668281 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1203 02:42:52.457734 668281 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:222 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:85 OomKillDisable:true NGoroutines:91 SystemTime:2021-12-03 02:42:52.405574498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnin
gs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I1203 02:42:52.642164 668281 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bo
ol=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/13059/minikube-v1.24.0-1638385553-13059.iso https://github.com/kubernetes/minikube/releases/download/v1.24.0-1638385553-13059/minikube-v1.24.0-1638385553-13059.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.24.0-1638385553-13059.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-
plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20211203024124-532170 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
I1203 02:42:54.291539 668281 out.go:176] * Pausing node pause-20211203024124-532170 ...
I1203 02:42:54.291585 668281 host.go:66] Checking if "pause-20211203024124-532170" exists ...
I1203 02:42:54.291965 668281 ssh_runner.go:195] Run: systemctl --version
I1203 02:42:54.292031 668281 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211203024124-532170
I1203 02:42:54.334110 668281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33286 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/machines/pause-20211203024124-532170/id_rsa Username:docker}
I1203 02:42:54.413080 668281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1203 02:42:54.422926 668281 pause.go:50] kubelet running: true
I1203 02:42:54.422990 668281 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
I1203 02:42:54.915998 668281 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
I1203 02:42:54.916108 668281 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
I1203 02:42:55.001665 668281 cri.go:76] found id: "44ad2903327a4831a287d02d9bf015c7c79184800d7de792ec88dd259c43c6f8"
I1203 02:42:55.001730 668281 cri.go:76] found id: "fa412d0bc64fe41cabe75d75b154341ad2989d0d273aca5f583e5b450247fcaf"
I1203 02:42:55.001739 668281 cri.go:76] found id: "9fe05a46928d77033acd027e53fc3377963639ad7b808f26355742d594065e8a"
I1203 02:42:55.001745 668281 cri.go:76] found id: "b96a30f689aa838a728994e19ad57f3194b434b5136e1ecab8a96d391bc05eaf"
I1203 02:42:55.001751 668281 cri.go:76] found id: "da77c4c16ea386dda39fcf91ff000cf63eb9f9b8cef0af731ccbbe2fe9f197dd"
I1203 02:42:55.001759 668281 cri.go:76] found id: "377954dfce779fd5854219f915357e582012617cb03c0a70c60b9987499185ac"
I1203 02:42:55.001766 668281 cri.go:76] found id: "c6bb5925aa26bb7f20b1c83f7c74ac39e07a5a726abc7531bf949e0549809c83"
I1203 02:42:55.001773 668281 cri.go:76] found id: "7861cc2da74e14be1efce89a227b8a087e1a4ad74eea5a57fd9979faa37223b4"
I1203 02:42:55.001781 668281 cri.go:76] found id: ""
I1203 02:42:55.001850 668281 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1203 02:42:55.045187 668281 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"1150dc7540b90bd2c4a50ddd37e291d8b5b53ebc8f30861bd69eabdb8cb5c03d","pid":1680,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1150dc7540b90bd2c4a50ddd37e291d8b5b53ebc8f30861bd69eabdb8cb5c03d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1150dc7540b90bd2c4a50ddd37e291d8b5b53ebc8f30861bd69eabdb8cb5c03d/rootfs","created":"2021-12-03T02:42:20.264783354Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1150dc7540b90bd2c4a50ddd37e291d8b5b53ebc8f30861bd69eabdb8cb5c03d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-5xmj2_9b4f2238-af1f-487f-80ff-a0a932437cee"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1a438759f3a6e291ca338642bc87e885a486d7d537309aa240f93ac2fdfa449c","pid":991,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a438759f3a6e291ca338642bc87e885a486d7d5
37309aa240f93ac2fdfa449c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a438759f3a6e291ca338642bc87e885a486d7d537309aa240f93ac2fdfa449c/rootfs","created":"2021-12-03T02:41:59.468855551Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1a438759f3a6e291ca338642bc87e885a486d7d537309aa240f93ac2fdfa449c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20211203024124-532170_d38c5cafc0eadb77f3c31b03ddf6e989"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"26442a09720c68bb0cf3763c7b29d02acba5cb4a86bfdfd5837c291c62e15c62","pid":1005,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/26442a09720c68bb0cf3763c7b29d02acba5cb4a86bfdfd5837c291c62e15c62","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/26442a09720c68bb0cf3763c7b29d02acba5cb4a86bfdfd5837c291c62e15c62/rootfs","created":"2021-12-03T02:41:59.568722379Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","i
o.kubernetes.cri.sandbox-id":"26442a09720c68bb0cf3763c7b29d02acba5cb4a86bfdfd5837c291c62e15c62","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20211203024124-532170_1d840b763857e30c34b6698e9fa7570f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"377954dfce779fd5854219f915357e582012617cb03c0a70c60b9987499185ac","pid":1143,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/377954dfce779fd5854219f915357e582012617cb03c0a70c60b9987499185ac","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/377954dfce779fd5854219f915357e582012617cb03c0a70c60b9987499185ac/rootfs","created":"2021-12-03T02:41:59.92570495Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4b59fc65934a4d1db5162370a8a418086d777ef9f626bdb0ec95e43f8caebae2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"44ad2903327a4831a287d02d9bf015c7c79184800d7de792ec88dd259c43c6f8","pid":2587,"s
tatus":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/44ad2903327a4831a287d02d9bf015c7c79184800d7de792ec88dd259c43c6f8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/44ad2903327a4831a287d02d9bf015c7c79184800d7de792ec88dd259c43c6f8/rootfs","created":"2021-12-03T02:42:51.560749945Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4b59fc65934a4d1db5162370a8a418086d777ef9f626bdb0ec95e43f8caebae2","pid":1014,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b59fc65934a4d1db5162370a8a418086d777ef9f626bdb0ec95e43f8caebae2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b59fc65934a4d1db5162370a8a418086d777ef9f626bdb0ec95e43f8caebae2/rootfs","created":"2021-12-03T02:41:59.568796096Z","annotations":{"io.kubernetes
.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4b59fc65934a4d1db5162370a8a418086d777ef9f626bdb0ec95e43f8caebae2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20211203024124-532170_2b61b38a825a0505d16b2d6a2a9cedfb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"52fd468fab925b0ce3f6717273564e197b13ac473a454f1a6b89ab38e9e832b2","pid":1687,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52fd468fab925b0ce3f6717273564e197b13ac473a454f1a6b89ab38e9e832b2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52fd468fab925b0ce3f6717273564e197b13ac473a454f1a6b89ab38e9e832b2/rootfs","created":"2021-12-03T02:42:20.596836103Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"52fd468fab925b0ce3f6717273564e197b13ac473a454f1a6b89ab38e9e832b2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-n6hkm_368971a0-2c0b-44f2-b66d-3257c9f080f4"},"owner":"root"},
{"ociVersion":"1.0.2-dev","id":"75715f0308eb13feb3e61ce039736ca12c9e5290f9133c2d31b2440b709e4195","pid":998,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/75715f0308eb13feb3e61ce039736ca12c9e5290f9133c2d31b2440b709e4195","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/75715f0308eb13feb3e61ce039736ca12c9e5290f9133c2d31b2440b709e4195/rootfs","created":"2021-12-03T02:41:59.568737058Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"75715f0308eb13feb3e61ce039736ca12c9e5290f9133c2d31b2440b709e4195","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20211203024124-532170_d9c5bb1dd4c0e3a00c6f04d2449fcce4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7861cc2da74e14be1efce89a227b8a087e1a4ad74eea5a57fd9979faa37223b4","pid":1077,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7861cc2da74e14be1efce89a227b8a087e1a4ad74eea5a57fd9979faa37223b4","r
ootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7861cc2da74e14be1efce89a227b8a087e1a4ad74eea5a57fd9979faa37223b4/rootfs","created":"2021-12-03T02:41:59.766374576Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1a438759f3a6e291ca338642bc87e885a486d7d537309aa240f93ac2fdfa449c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"989b0a20cd6bf68f31e3f0e13ec1f38a08139a4143c3476766ea123fb4d7c6da","pid":1987,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/989b0a20cd6bf68f31e3f0e13ec1f38a08139a4143c3476766ea123fb4d7c6da","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/989b0a20cd6bf68f31e3f0e13ec1f38a08139a4143c3476766ea123fb4d7c6da/rootfs","created":"2021-12-03T02:42:33.364777884Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"989b0a20cd6bf68f31e3f0e13ec1f38a08139a4143c3476766ea123fb4d7c6da","io.kubernetes.cri.sandb
ox-log-directory":"/var/log/pods/kube-system_coredns-78fcd69978-qwmpd_7f97c64e-1e68-4356-a1b5-f77f7cc81b24"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f","pid":2556,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f/rootfs","created":"2021-12-03T02:42:51.32106819Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_9aa5302f-010c-4351-96c2-b2485180be47"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9fe05a46928d77033acd027e53fc3377963639ad7b808f26355742d594065e8a","pid":1867,"status":"running","bundle":"/run/containerd/io.conta
inerd.runtime.v2.task/k8s.io/9fe05a46928d77033acd027e53fc3377963639ad7b808f26355742d594065e8a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9fe05a46928d77033acd027e53fc3377963639ad7b808f26355742d594065e8a/rootfs","created":"2021-12-03T02:42:21.456633358Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"52fd468fab925b0ce3f6717273564e197b13ac473a454f1a6b89ab38e9e832b2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b96a30f689aa838a728994e19ad57f3194b434b5136e1ecab8a96d391bc05eaf","pid":1718,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b96a30f689aa838a728994e19ad57f3194b434b5136e1ecab8a96d391bc05eaf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b96a30f689aa838a728994e19ad57f3194b434b5136e1ecab8a96d391bc05eaf/rootfs","created":"2021-12-03T02:42:20.536651488Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.contai
ner-type":"container","io.kubernetes.cri.sandbox-id":"1150dc7540b90bd2c4a50ddd37e291d8b5b53ebc8f30861bd69eabdb8cb5c03d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c6bb5925aa26bb7f20b1c83f7c74ac39e07a5a726abc7531bf949e0549809c83","pid":1136,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6bb5925aa26bb7f20b1c83f7c74ac39e07a5a726abc7531bf949e0549809c83","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6bb5925aa26bb7f20b1c83f7c74ac39e07a5a726abc7531bf949e0549809c83/rootfs","created":"2021-12-03T02:41:59.868290191Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"26442a09720c68bb0cf3763c7b29d02acba5cb4a86bfdfd5837c291c62e15c62"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"da77c4c16ea386dda39fcf91ff000cf63eb9f9b8cef0af731ccbbe2fe9f197dd","pid":1152,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da77c4c16ea386dda39fcf91ff000cf63eb9f9b8c
ef0af731ccbbe2fe9f197dd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da77c4c16ea386dda39fcf91ff000cf63eb9f9b8cef0af731ccbbe2fe9f197dd/rootfs","created":"2021-12-03T02:41:59.928696433Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"75715f0308eb13feb3e61ce039736ca12c9e5290f9133c2d31b2440b709e4195"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fa412d0bc64fe41cabe75d75b154341ad2989d0d273aca5f583e5b450247fcaf","pid":2019,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa412d0bc64fe41cabe75d75b154341ad2989d0d273aca5f583e5b450247fcaf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa412d0bc64fe41cabe75d75b154341ad2989d0d273aca5f583e5b450247fcaf/rootfs","created":"2021-12-03T02:42:33.648769478Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"989b0a2
0cd6bf68f31e3f0e13ec1f38a08139a4143c3476766ea123fb4d7c6da"},"owner":"root"}]
I1203 02:42:55.045428 668281 cri.go:113] list returned 16 containers
I1203 02:42:55.045443 668281 cri.go:116] container: {ID:1150dc7540b90bd2c4a50ddd37e291d8b5b53ebc8f30861bd69eabdb8cb5c03d Status:running}
I1203 02:42:55.045477 668281 cri.go:118] skipping 1150dc7540b90bd2c4a50ddd37e291d8b5b53ebc8f30861bd69eabdb8cb5c03d - not in ps
I1203 02:42:55.045485 668281 cri.go:116] container: {ID:1a438759f3a6e291ca338642bc87e885a486d7d537309aa240f93ac2fdfa449c Status:running}
I1203 02:42:55.045495 668281 cri.go:118] skipping 1a438759f3a6e291ca338642bc87e885a486d7d537309aa240f93ac2fdfa449c - not in ps
I1203 02:42:55.045500 668281 cri.go:116] container: {ID:26442a09720c68bb0cf3763c7b29d02acba5cb4a86bfdfd5837c291c62e15c62 Status:running}
I1203 02:42:55.045507 668281 cri.go:118] skipping 26442a09720c68bb0cf3763c7b29d02acba5cb4a86bfdfd5837c291c62e15c62 - not in ps
I1203 02:42:55.045514 668281 cri.go:116] container: {ID:377954dfce779fd5854219f915357e582012617cb03c0a70c60b9987499185ac Status:running}
I1203 02:42:55.045521 668281 cri.go:116] container: {ID:44ad2903327a4831a287d02d9bf015c7c79184800d7de792ec88dd259c43c6f8 Status:running}
I1203 02:42:55.045530 668281 cri.go:116] container: {ID:4b59fc65934a4d1db5162370a8a418086d777ef9f626bdb0ec95e43f8caebae2 Status:running}
I1203 02:42:55.045539 668281 cri.go:118] skipping 4b59fc65934a4d1db5162370a8a418086d777ef9f626bdb0ec95e43f8caebae2 - not in ps
I1203 02:42:55.045552 668281 cri.go:116] container: {ID:52fd468fab925b0ce3f6717273564e197b13ac473a454f1a6b89ab38e9e832b2 Status:running}
I1203 02:42:55.045561 668281 cri.go:118] skipping 52fd468fab925b0ce3f6717273564e197b13ac473a454f1a6b89ab38e9e832b2 - not in ps
I1203 02:42:55.045566 668281 cri.go:116] container: {ID:75715f0308eb13feb3e61ce039736ca12c9e5290f9133c2d31b2440b709e4195 Status:running}
I1203 02:42:55.045573 668281 cri.go:118] skipping 75715f0308eb13feb3e61ce039736ca12c9e5290f9133c2d31b2440b709e4195 - not in ps
I1203 02:42:55.045580 668281 cri.go:116] container: {ID:7861cc2da74e14be1efce89a227b8a087e1a4ad74eea5a57fd9979faa37223b4 Status:running}
I1203 02:42:55.045587 668281 cri.go:116] container: {ID:989b0a20cd6bf68f31e3f0e13ec1f38a08139a4143c3476766ea123fb4d7c6da Status:running}
I1203 02:42:55.045595 668281 cri.go:118] skipping 989b0a20cd6bf68f31e3f0e13ec1f38a08139a4143c3476766ea123fb4d7c6da - not in ps
I1203 02:42:55.045600 668281 cri.go:116] container: {ID:9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f Status:running}
I1203 02:42:55.045614 668281 cri.go:118] skipping 9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f - not in ps
I1203 02:42:55.045623 668281 cri.go:116] container: {ID:9fe05a46928d77033acd027e53fc3377963639ad7b808f26355742d594065e8a Status:running}
I1203 02:42:55.045633 668281 cri.go:116] container: {ID:b96a30f689aa838a728994e19ad57f3194b434b5136e1ecab8a96d391bc05eaf Status:running}
I1203 02:42:55.045640 668281 cri.go:116] container: {ID:c6bb5925aa26bb7f20b1c83f7c74ac39e07a5a726abc7531bf949e0549809c83 Status:running}
I1203 02:42:55.045648 668281 cri.go:116] container: {ID:da77c4c16ea386dda39fcf91ff000cf63eb9f9b8cef0af731ccbbe2fe9f197dd Status:running}
I1203 02:42:55.045654 668281 cri.go:116] container: {ID:fa412d0bc64fe41cabe75d75b154341ad2989d0d273aca5f583e5b450247fcaf Status:running}
I1203 02:42:55.045700 668281 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io pause 377954dfce779fd5854219f915357e582012617cb03c0a70c60b9987499185ac
I1203 02:42:55.062697 668281 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io pause 44ad2903327a4831a287d02d9bf015c7c79184800d7de792ec88dd259c43c6f8
I1203 02:42:55.080693 668281 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io pause 7861cc2da74e14be1efce89a227b8a087e1a4ad74eea5a57fd9979faa37223b4
I1203 02:42:55.099581 668281 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io pause 9fe05a46928d77033acd027e53fc3377963639ad7b808f26355742d594065e8a
I1203 02:42:55.116571 668281 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io pause b96a30f689aa838a728994e19ad57f3194b434b5136e1ecab8a96d391bc05eaf
I1203 02:42:55.133319 668281 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io pause c6bb5925aa26bb7f20b1c83f7c74ac39e07a5a726abc7531bf949e0549809c83
I1203 02:42:55.391880 668281 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause c6bb5925aa26bb7f20b1c83f7c74ac39e07a5a726abc7531bf949e0549809c83: Process exited with status 1
stdout:
stderr:
time="2021-12-03T02:42:55Z" level=error msg="unable to freeze"
I1203 02:42:55.668280 668281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1203 02:42:55.679598 668281 pause.go:50] kubelet running: false
I1203 02:42:55.679672 668281 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
I1203 02:42:55.779952 668281 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
I1203 02:42:55.780052 668281 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
I1203 02:42:55.867292 668281 cri.go:76] found id: "44ad2903327a4831a287d02d9bf015c7c79184800d7de792ec88dd259c43c6f8"
I1203 02:42:55.867322 668281 cri.go:76] found id: "fa412d0bc64fe41cabe75d75b154341ad2989d0d273aca5f583e5b450247fcaf"
I1203 02:42:55.867331 668281 cri.go:76] found id: "9fe05a46928d77033acd027e53fc3377963639ad7b808f26355742d594065e8a"
I1203 02:42:55.867339 668281 cri.go:76] found id: "b96a30f689aa838a728994e19ad57f3194b434b5136e1ecab8a96d391bc05eaf"
I1203 02:42:55.867346 668281 cri.go:76] found id: "da77c4c16ea386dda39fcf91ff000cf63eb9f9b8cef0af731ccbbe2fe9f197dd"
I1203 02:42:55.867354 668281 cri.go:76] found id: "377954dfce779fd5854219f915357e582012617cb03c0a70c60b9987499185ac"
I1203 02:42:55.867361 668281 cri.go:76] found id: "c6bb5925aa26bb7f20b1c83f7c74ac39e07a5a726abc7531bf949e0549809c83"
I1203 02:42:55.867368 668281 cri.go:76] found id: "7861cc2da74e14be1efce89a227b8a087e1a4ad74eea5a57fd9979faa37223b4"
I1203 02:42:55.867375 668281 cri.go:76] found id: ""
I1203 02:42:55.867417 668281 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1203 02:42:55.906466 668281 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"1150dc7540b90bd2c4a50ddd37e291d8b5b53ebc8f30861bd69eabdb8cb5c03d","pid":1680,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1150dc7540b90bd2c4a50ddd37e291d8b5b53ebc8f30861bd69eabdb8cb5c03d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1150dc7540b90bd2c4a50ddd37e291d8b5b53ebc8f30861bd69eabdb8cb5c03d/rootfs","created":"2021-12-03T02:42:20.264783354Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1150dc7540b90bd2c4a50ddd37e291d8b5b53ebc8f30861bd69eabdb8cb5c03d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-5xmj2_9b4f2238-af1f-487f-80ff-a0a932437cee"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1a438759f3a6e291ca338642bc87e885a486d7d537309aa240f93ac2fdfa449c","pid":991,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a438759f3a6e291ca338642bc87e885a486d7d5
37309aa240f93ac2fdfa449c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a438759f3a6e291ca338642bc87e885a486d7d537309aa240f93ac2fdfa449c/rootfs","created":"2021-12-03T02:41:59.468855551Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1a438759f3a6e291ca338642bc87e885a486d7d537309aa240f93ac2fdfa449c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20211203024124-532170_d38c5cafc0eadb77f3c31b03ddf6e989"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"26442a09720c68bb0cf3763c7b29d02acba5cb4a86bfdfd5837c291c62e15c62","pid":1005,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/26442a09720c68bb0cf3763c7b29d02acba5cb4a86bfdfd5837c291c62e15c62","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/26442a09720c68bb0cf3763c7b29d02acba5cb4a86bfdfd5837c291c62e15c62/rootfs","created":"2021-12-03T02:41:59.568722379Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","i
o.kubernetes.cri.sandbox-id":"26442a09720c68bb0cf3763c7b29d02acba5cb4a86bfdfd5837c291c62e15c62","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20211203024124-532170_1d840b763857e30c34b6698e9fa7570f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"377954dfce779fd5854219f915357e582012617cb03c0a70c60b9987499185ac","pid":1143,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/377954dfce779fd5854219f915357e582012617cb03c0a70c60b9987499185ac","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/377954dfce779fd5854219f915357e582012617cb03c0a70c60b9987499185ac/rootfs","created":"2021-12-03T02:41:59.92570495Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4b59fc65934a4d1db5162370a8a418086d777ef9f626bdb0ec95e43f8caebae2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"44ad2903327a4831a287d02d9bf015c7c79184800d7de792ec88dd259c43c6f8","pid":2587,"st
atus":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/44ad2903327a4831a287d02d9bf015c7c79184800d7de792ec88dd259c43c6f8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/44ad2903327a4831a287d02d9bf015c7c79184800d7de792ec88dd259c43c6f8/rootfs","created":"2021-12-03T02:42:51.560749945Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4b59fc65934a4d1db5162370a8a418086d777ef9f626bdb0ec95e43f8caebae2","pid":1014,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b59fc65934a4d1db5162370a8a418086d777ef9f626bdb0ec95e43f8caebae2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b59fc65934a4d1db5162370a8a418086d777ef9f626bdb0ec95e43f8caebae2/rootfs","created":"2021-12-03T02:41:59.568796096Z","annotations":{"io.kubernetes.c
ri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4b59fc65934a4d1db5162370a8a418086d777ef9f626bdb0ec95e43f8caebae2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20211203024124-532170_2b61b38a825a0505d16b2d6a2a9cedfb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"52fd468fab925b0ce3f6717273564e197b13ac473a454f1a6b89ab38e9e832b2","pid":1687,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52fd468fab925b0ce3f6717273564e197b13ac473a454f1a6b89ab38e9e832b2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52fd468fab925b0ce3f6717273564e197b13ac473a454f1a6b89ab38e9e832b2/rootfs","created":"2021-12-03T02:42:20.596836103Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"52fd468fab925b0ce3f6717273564e197b13ac473a454f1a6b89ab38e9e832b2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-n6hkm_368971a0-2c0b-44f2-b66d-3257c9f080f4"},"owner":"root"},{"
ociVersion":"1.0.2-dev","id":"75715f0308eb13feb3e61ce039736ca12c9e5290f9133c2d31b2440b709e4195","pid":998,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/75715f0308eb13feb3e61ce039736ca12c9e5290f9133c2d31b2440b709e4195","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/75715f0308eb13feb3e61ce039736ca12c9e5290f9133c2d31b2440b709e4195/rootfs","created":"2021-12-03T02:41:59.568737058Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"75715f0308eb13feb3e61ce039736ca12c9e5290f9133c2d31b2440b709e4195","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20211203024124-532170_d9c5bb1dd4c0e3a00c6f04d2449fcce4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7861cc2da74e14be1efce89a227b8a087e1a4ad74eea5a57fd9979faa37223b4","pid":1077,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7861cc2da74e14be1efce89a227b8a087e1a4ad74eea5a57fd9979faa37223b4","root
fs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7861cc2da74e14be1efce89a227b8a087e1a4ad74eea5a57fd9979faa37223b4/rootfs","created":"2021-12-03T02:41:59.766374576Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1a438759f3a6e291ca338642bc87e885a486d7d537309aa240f93ac2fdfa449c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"989b0a20cd6bf68f31e3f0e13ec1f38a08139a4143c3476766ea123fb4d7c6da","pid":1987,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/989b0a20cd6bf68f31e3f0e13ec1f38a08139a4143c3476766ea123fb4d7c6da","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/989b0a20cd6bf68f31e3f0e13ec1f38a08139a4143c3476766ea123fb4d7c6da/rootfs","created":"2021-12-03T02:42:33.364777884Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"989b0a20cd6bf68f31e3f0e13ec1f38a08139a4143c3476766ea123fb4d7c6da","io.kubernetes.cri.sandbox-
log-directory":"/var/log/pods/kube-system_coredns-78fcd69978-qwmpd_7f97c64e-1e68-4356-a1b5-f77f7cc81b24"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f","pid":2556,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f/rootfs","created":"2021-12-03T02:42:51.32106819Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_9aa5302f-010c-4351-96c2-b2485180be47"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9fe05a46928d77033acd027e53fc3377963639ad7b808f26355742d594065e8a","pid":1867,"status":"paused","bundle":"/run/containerd/io.container
d.runtime.v2.task/k8s.io/9fe05a46928d77033acd027e53fc3377963639ad7b808f26355742d594065e8a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9fe05a46928d77033acd027e53fc3377963639ad7b808f26355742d594065e8a/rootfs","created":"2021-12-03T02:42:21.456633358Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"52fd468fab925b0ce3f6717273564e197b13ac473a454f1a6b89ab38e9e832b2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b96a30f689aa838a728994e19ad57f3194b434b5136e1ecab8a96d391bc05eaf","pid":1718,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b96a30f689aa838a728994e19ad57f3194b434b5136e1ecab8a96d391bc05eaf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b96a30f689aa838a728994e19ad57f3194b434b5136e1ecab8a96d391bc05eaf/rootfs","created":"2021-12-03T02:42:20.536651488Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-t
ype":"container","io.kubernetes.cri.sandbox-id":"1150dc7540b90bd2c4a50ddd37e291d8b5b53ebc8f30861bd69eabdb8cb5c03d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c6bb5925aa26bb7f20b1c83f7c74ac39e07a5a726abc7531bf949e0549809c83","pid":1136,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6bb5925aa26bb7f20b1c83f7c74ac39e07a5a726abc7531bf949e0549809c83","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6bb5925aa26bb7f20b1c83f7c74ac39e07a5a726abc7531bf949e0549809c83/rootfs","created":"2021-12-03T02:41:59.868290191Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"26442a09720c68bb0cf3763c7b29d02acba5cb4a86bfdfd5837c291c62e15c62"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"da77c4c16ea386dda39fcf91ff000cf63eb9f9b8cef0af731ccbbe2fe9f197dd","pid":1152,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da77c4c16ea386dda39fcf91ff000cf63eb9f9b8cef0af
731ccbbe2fe9f197dd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da77c4c16ea386dda39fcf91ff000cf63eb9f9b8cef0af731ccbbe2fe9f197dd/rootfs","created":"2021-12-03T02:41:59.928696433Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"75715f0308eb13feb3e61ce039736ca12c9e5290f9133c2d31b2440b709e4195"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fa412d0bc64fe41cabe75d75b154341ad2989d0d273aca5f583e5b450247fcaf","pid":2019,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa412d0bc64fe41cabe75d75b154341ad2989d0d273aca5f583e5b450247fcaf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa412d0bc64fe41cabe75d75b154341ad2989d0d273aca5f583e5b450247fcaf/rootfs","created":"2021-12-03T02:42:33.648769478Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"989b0a20cd6b
f68f31e3f0e13ec1f38a08139a4143c3476766ea123fb4d7c6da"},"owner":"root"}]
I1203 02:42:55.906736 668281 cri.go:113] list returned 16 containers
I1203 02:42:55.906754 668281 cri.go:116] container: {ID:1150dc7540b90bd2c4a50ddd37e291d8b5b53ebc8f30861bd69eabdb8cb5c03d Status:running}
I1203 02:42:55.906767 668281 cri.go:118] skipping 1150dc7540b90bd2c4a50ddd37e291d8b5b53ebc8f30861bd69eabdb8cb5c03d - not in ps
I1203 02:42:55.906773 668281 cri.go:116] container: {ID:1a438759f3a6e291ca338642bc87e885a486d7d537309aa240f93ac2fdfa449c Status:running}
I1203 02:42:55.906782 668281 cri.go:118] skipping 1a438759f3a6e291ca338642bc87e885a486d7d537309aa240f93ac2fdfa449c - not in ps
I1203 02:42:55.906787 668281 cri.go:116] container: {ID:26442a09720c68bb0cf3763c7b29d02acba5cb4a86bfdfd5837c291c62e15c62 Status:running}
I1203 02:42:55.906796 668281 cri.go:118] skipping 26442a09720c68bb0cf3763c7b29d02acba5cb4a86bfdfd5837c291c62e15c62 - not in ps
I1203 02:42:55.906801 668281 cri.go:116] container: {ID:377954dfce779fd5854219f915357e582012617cb03c0a70c60b9987499185ac Status:paused}
I1203 02:42:55.906815 668281 cri.go:122] skipping {377954dfce779fd5854219f915357e582012617cb03c0a70c60b9987499185ac paused}: state = "paused", want "running"
I1203 02:42:55.906831 668281 cri.go:116] container: {ID:44ad2903327a4831a287d02d9bf015c7c79184800d7de792ec88dd259c43c6f8 Status:paused}
I1203 02:42:55.906838 668281 cri.go:122] skipping {44ad2903327a4831a287d02d9bf015c7c79184800d7de792ec88dd259c43c6f8 paused}: state = "paused", want "running"
I1203 02:42:55.906847 668281 cri.go:116] container: {ID:4b59fc65934a4d1db5162370a8a418086d777ef9f626bdb0ec95e43f8caebae2 Status:running}
I1203 02:42:55.906853 668281 cri.go:118] skipping 4b59fc65934a4d1db5162370a8a418086d777ef9f626bdb0ec95e43f8caebae2 - not in ps
I1203 02:42:55.906858 668281 cri.go:116] container: {ID:52fd468fab925b0ce3f6717273564e197b13ac473a454f1a6b89ab38e9e832b2 Status:running}
I1203 02:42:55.906867 668281 cri.go:118] skipping 52fd468fab925b0ce3f6717273564e197b13ac473a454f1a6b89ab38e9e832b2 - not in ps
I1203 02:42:55.906872 668281 cri.go:116] container: {ID:75715f0308eb13feb3e61ce039736ca12c9e5290f9133c2d31b2440b709e4195 Status:running}
I1203 02:42:55.906882 668281 cri.go:118] skipping 75715f0308eb13feb3e61ce039736ca12c9e5290f9133c2d31b2440b709e4195 - not in ps
I1203 02:42:55.906887 668281 cri.go:116] container: {ID:7861cc2da74e14be1efce89a227b8a087e1a4ad74eea5a57fd9979faa37223b4 Status:paused}
I1203 02:42:55.906896 668281 cri.go:122] skipping {7861cc2da74e14be1efce89a227b8a087e1a4ad74eea5a57fd9979faa37223b4 paused}: state = "paused", want "running"
I1203 02:42:55.906902 668281 cri.go:116] container: {ID:989b0a20cd6bf68f31e3f0e13ec1f38a08139a4143c3476766ea123fb4d7c6da Status:running}
I1203 02:42:55.906912 668281 cri.go:118] skipping 989b0a20cd6bf68f31e3f0e13ec1f38a08139a4143c3476766ea123fb4d7c6da - not in ps
I1203 02:42:55.906917 668281 cri.go:116] container: {ID:9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f Status:running}
I1203 02:42:55.906926 668281 cri.go:118] skipping 9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f - not in ps
I1203 02:42:55.906931 668281 cri.go:116] container: {ID:9fe05a46928d77033acd027e53fc3377963639ad7b808f26355742d594065e8a Status:paused}
I1203 02:42:55.906937 668281 cri.go:122] skipping {9fe05a46928d77033acd027e53fc3377963639ad7b808f26355742d594065e8a paused}: state = "paused", want "running"
I1203 02:42:55.906946 668281 cri.go:116] container: {ID:b96a30f689aa838a728994e19ad57f3194b434b5136e1ecab8a96d391bc05eaf Status:paused}
I1203 02:42:55.906952 668281 cri.go:122] skipping {b96a30f689aa838a728994e19ad57f3194b434b5136e1ecab8a96d391bc05eaf paused}: state = "paused", want "running"
I1203 02:42:55.906961 668281 cri.go:116] container: {ID:c6bb5925aa26bb7f20b1c83f7c74ac39e07a5a726abc7531bf949e0549809c83 Status:running}
I1203 02:42:55.906967 668281 cri.go:116] container: {ID:da77c4c16ea386dda39fcf91ff000cf63eb9f9b8cef0af731ccbbe2fe9f197dd Status:running}
I1203 02:42:55.906976 668281 cri.go:116] container: {ID:fa412d0bc64fe41cabe75d75b154341ad2989d0d273aca5f583e5b450247fcaf Status:running}
I1203 02:42:55.907025 668281 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io pause c6bb5925aa26bb7f20b1c83f7c74ac39e07a5a726abc7531bf949e0549809c83
I1203 02:42:56.800699 668281 out.go:176]
W1203 02:42:56.800957 668281 out.go:241] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause c6bb5925aa26bb7f20b1c83f7c74ac39e07a5a726abc7531bf949e0549809c83: Process exited with status 1
stdout:
stderr:
time="2021-12-03T02:42:56Z" level=error msg="unable to freeze"
W1203 02:42:56.800978 668281 out.go:241] *
W1203 02:42:56.816094 668281 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1203 02:42:56.820126 668281 out.go:176]
** /stderr **
pause_test.go:110: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-20211203024124-532170 --alsologtostderr -v=5" : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======> post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run: docker inspect pause-20211203024124-532170
helpers_test.go:236: (dbg) docker inspect pause-20211203024124-532170:
-- stdout --
[
{
"Id": "2fc356be3f9aeb37d7f720969ef19bf8110acac88780406b5d2bec139f1913fc",
"Created": "2021-12-03T02:41:26.275196666Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 651787,
"ExitCode": 0,
"Error": "",
"StartedAt": "2021-12-03T02:41:27.094995507Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:e2a6c047beddf8261495222adf87089305bbc18e350587b01ebe3725535b5871",
"ResolvConfPath": "/var/lib/docker/containers/2fc356be3f9aeb37d7f720969ef19bf8110acac88780406b5d2bec139f1913fc/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/2fc356be3f9aeb37d7f720969ef19bf8110acac88780406b5d2bec139f1913fc/hostname",
"HostsPath": "/var/lib/docker/containers/2fc356be3f9aeb37d7f720969ef19bf8110acac88780406b5d2bec139f1913fc/hosts",
"LogPath": "/var/lib/docker/containers/2fc356be3f9aeb37d7f720969ef19bf8110acac88780406b5d2bec139f1913fc/2fc356be3f9aeb37d7f720969ef19bf8110acac88780406b5d2bec139f1913fc-json.log",
"Name": "/pause-20211203024124-532170",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"pause-20211203024124-532170:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "pause-20211203024124-532170",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [
{
"PathOnHost": "/dev/fuse",
"PathInContainer": "/dev/fuse",
"CgroupPermissions": "rwm"
}
],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/7b5a1f59d5810bced6fa94f13fccc8233a0a812da9bb925602d8108fc5966bbf-init/diff:/var/lib/docker/overlay2/6ac4080541cfeacde0b54b862ed3e8c1497bf3d3f22b73f3dd4ce1f668e19c72/diff:/var/lib/docker/overlay2/3e23ff251c4a4f7c2bc4e503098b4a2c6cc61a1d99b51a599fab585fe722bb1b/diff:/var/lib/docker/overlay2/cc6e93735a0ca1de72f287a78429adcbf52754f7b56e82fd10136b36df126cb7/diff:/var/lib/docker/overlay2/9b91525d2aa18b23a08318fd216cf0dd395530e8ee0971ef4ee837687f222c44/diff:/var/lib/docker/overlay2/afd7645dd1e3de615d5e6b8e38717640c35dd9d0dbad5d79e1e72c29f519e047/diff:/var/lib/docker/overlay2/e42185b4ffd5fd72d5dd3032240c65ebde359f8e05c73af3d0187f32c837a786/diff:/var/lib/docker/overlay2/fc00a8e7d591e187ad76a50b07d8c5a06e15c107d4f840ccea15b68e28da1980/diff:/var/lib/docker/overlay2/a29bf1481ece426838b6a498b3effdf9ff07fce93c547917247a80b257a1fe6a/diff:/var/lib/docker/overlay2/d08861a8de693171af3100608e63638de0eff4ee0ab9dad23a141b63c9fc78ba/diff:/var/lib/docker/overlay2/a810fd
33b8bdb6290817982a645db1f3f53d332f221af803394694d926de6ca1/diff:/var/lib/docker/overlay2/82d5a8bdd562d849d98ab607f16a3a733ccc7b8f078412597b1324d4fadd3be1/diff:/var/lib/docker/overlay2/1a93d862337e75ae7f4350d3b11066c9de4c24aceee76546de610057f617281d/diff:/var/lib/docker/overlay2/016cb2f9bec981cb59ea9c574c581bad8e88da405748168504b409ab365eb79b/diff:/var/lib/docker/overlay2/83e41d3d061b2ae009fb28d0223691d1ec84632e4b835f49eec164d4466dae2d/diff:/var/lib/docker/overlay2/0ce62a53786be1b733f4d9a3f0bb5d6eddb1df23134f5b87286ea0a0cf9fbbf0/diff:/var/lib/docker/overlay2/c13141fe83f4579fcc3d40c499982793faebda827f5ca6cc5534911b1ec393f6/diff:/var/lib/docker/overlay2/55fce7735b2305346d30a554960b07ae732bacdfbc399692dd2506c80daf9877/diff:/var/lib/docker/overlay2/252a160a3b32162d3ff13c40ccc33cabd273340235797ebee736493a1029eda7/diff:/var/lib/docker/overlay2/fce6704ef3f6cc37735e8972372e0f5e0bfbf4af83f5ff6fd027e174c59578e0/diff:/var/lib/docker/overlay2/88c75fcb26b5194e243c651225b00fb638c5ea62eb979d7df5b728df5ef5195a/diff:/var/lib/d
ocker/overlay2/0771e6e29be058dc4b8a5521020f5d66e3e4f0996747baea7cb18371e3545b6b/diff:/var/lib/docker/overlay2/9b15999e93e34bb79fccce68ef9de2dfe1780f1d4c0595f52d15981d7090babb/diff:/var/lib/docker/overlay2/8fd40ec7570ab0690e15c09834e3bc5284b3badf0f5b98cf234cc063d023fbce/diff:/var/lib/docker/overlay2/317d010d230847ac38e52570cb1bf66b55a735ba2fcd26045ca1d928fee269e9/diff:/var/lib/docker/overlay2/9e0744f6558a5e30811c7bc770686bb03018000ec5504f0a7342d648858c6520/diff:/var/lib/docker/overlay2/2b1e4978e05c0bbd41e0db471a4546b47e88d4d07bba062e641ee19252fbe324/diff:/var/lib/docker/overlay2/8a5ad66f897aa93b3a59cb81de4d6f3a4437cdf29b1daaac0470479e0627d353/diff:/var/lib/docker/overlay2/4c73a8fc02a1854c714f156081c2ed75e595e3dde9226820f21ff9353b52fbb7/diff:/var/lib/docker/overlay2/be02751327b17c0e71631c5e6cbea324ca807f35ff112d906955328aabd427d8/diff:/var/lib/docker/overlay2/500480e16cba6dcc737fbaabcd657d62a6d877b3505eec63f1de6e5e3c9dbb92/diff:/var/lib/docker/overlay2/2cd99ae5cfb49d19c25b95ac17d84a6889e691672d6d93a9f066ca8b7c4
5289f/diff:/var/lib/docker/overlay2/bc1787830a034113efe9235ddf0dc8652dfff6e7926a63d839120bbe9ebc0b99/diff:/var/lib/docker/overlay2/44149fe8e3297368fee684058e3d52cef2712454b3145aa883bcb63bbae8542f/diff:/var/lib/docker/overlay2/433b47ea5ada41625d6479daf7944dc706ab5abbcad92f65a32b40a754cbf645/diff:/var/lib/docker/overlay2/77082cd9d165ff33aaddc632231ac2ea8dc30f9073037fc9c3c97f50db12d5b2/diff:/var/lib/docker/overlay2/631d81e1d8b807f0a941170215bfd43566c2823909251d69246f84dc74dff425/diff:/var/lib/docker/overlay2/6eb234f85614a81f5323a7f51a68384027e94afd402c9851374d990371ebf594/diff:/var/lib/docker/overlay2/8c188ae964eaa37da1b63d82bc30f6d81e8a06ee2c1af3f12719f9cf443b6e09/diff:/var/lib/docker/overlay2/86d4eaadf1b0680172d06e3118a53a77440af3c8e5bde8c29bb97e7f94599244/diff:/var/lib/docker/overlay2/78fadeb69e3f7825d1bad64e2e686eff962fc4ed859de8fd0d2b2d30d56510a1/diff",
"MergedDir": "/var/lib/docker/overlay2/7b5a1f59d5810bced6fa94f13fccc8233a0a812da9bb925602d8108fc5966bbf/merged",
"UpperDir": "/var/lib/docker/overlay2/7b5a1f59d5810bced6fa94f13fccc8233a0a812da9bb925602d8108fc5966bbf/diff",
"WorkDir": "/var/lib/docker/overlay2/7b5a1f59d5810bced6fa94f13fccc8233a0a812da9bb925602d8108fc5966bbf/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "pause-20211203024124-532170",
"Source": "/var/lib/docker/volumes/pause-20211203024124-532170/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "pause-20211203024124-532170",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "pause-20211203024124-532170",
"name.minikube.sigs.k8s.io": "pause-20211203024124-532170",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "c1242b1b033f16bedee595a0280553c95576e94729234572b5dbfcbd58cee8ec",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33286"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33285"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33282"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33284"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33283"
}
]
},
"SandboxKey": "/var/run/docker/netns/c1242b1b033f",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"pause-20211203024124-532170": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": [
"2fc356be3f9a"
],
"NetworkID": "da1d0795a7d3bb5f007c696fba96d61b49ce95f6e99122ac37e1226dd45b8f38",
"EndpointID": "59b7bb705dfc9e261679395a707aa0e1863d48a4a93f55f99fdcc3b7f3f19ed3",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20211203024124-532170 -n pause-20211203024124-532170
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20211203024124-532170 -n pause-20211203024124-532170: exit status 2 (2.604427428s)
-- stdout --
Running
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======> post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p pause-20211203024124-532170 logs -n 25
=== CONT TestPause/serial/Pause
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p pause-20211203024124-532170 logs -n 25: (12.610027161s)
helpers_test.go:253: TestPause/serial/Pause logs:
-- stdout --
*
* ==> Audit <==
* |---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
| delete | -p | multinode-20211203022621-532170-m03 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:36:38 UTC | Fri, 03 Dec 2021 02:36:40 UTC |
| | multinode-20211203022621-532170-m03 | | | | | |
| delete | -p | multinode-20211203022621-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:36:40 UTC | Fri, 03 Dec 2021 02:36:46 UTC |
| | multinode-20211203022621-532170 | | | | | |
| start | -p | test-preload-20211203023646-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:36:46 UTC | Fri, 03 Dec 2021 02:38:22 UTC |
| | test-preload-20211203023646-532170 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.17.0 | | | | | |
| ssh | -p | test-preload-20211203023646-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:38:22 UTC | Fri, 03 Dec 2021 02:38:25 UTC |
| | test-preload-20211203023646-532170 | | | | | |
| | -- sudo crictl pull busybox | | | | | |
| start | -p | test-preload-20211203023646-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:38:25 UTC | Fri, 03 Dec 2021 02:39:08 UTC |
| | test-preload-20211203023646-532170 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --wait=true --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.17.3 | | | | | |
| ssh | -p | test-preload-20211203023646-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:39:08 UTC | Fri, 03 Dec 2021 02:39:09 UTC |
| | test-preload-20211203023646-532170 | | | | | |
| | -- sudo crictl image ls | | | | | |
| delete | -p | test-preload-20211203023646-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:39:09 UTC | Fri, 03 Dec 2021 02:39:12 UTC |
| | test-preload-20211203023646-532170 | | | | | |
| start | -p | scheduled-stop-20211203023912-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:39:12 UTC | Fri, 03 Dec 2021 02:39:53 UTC |
| | scheduled-stop-20211203023912-532170 | | | | | |
| | --memory=2048 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| stop | -p | scheduled-stop-20211203023912-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:39:54 UTC | Fri, 03 Dec 2021 02:39:54 UTC |
| | scheduled-stop-20211203023912-532170 | | | | | |
| | --cancel-scheduled | | | | | |
| stop | -p | scheduled-stop-20211203023912-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:40:20 UTC | Fri, 03 Dec 2021 02:40:56 UTC |
| | scheduled-stop-20211203023912-532170 | | | | | |
| | --schedule 15s | | | | | |
| delete | -p | scheduled-stop-20211203023912-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:41:05 UTC | Fri, 03 Dec 2021 02:41:10 UTC |
| | scheduled-stop-20211203023912-532170 | | | | | |
| delete | -p | insufficient-storage-20211203024110-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:41:17 UTC | Fri, 03 Dec 2021 02:41:24 UTC |
| | insufficient-storage-20211203024110-532170 | | | | | |
| start | -p | NoKubernetes-20211203024124-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:41:24 UTC | Fri, 03 Dec 2021 02:41:32 UTC |
| | NoKubernetes-20211203024124-532170 | | | | | |
| | --no-kubernetes --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| profile | list | minikube | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:41:33 UTC | Fri, 03 Dec 2021 02:41:33 UTC |
| profile | list --output=json | minikube | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:41:33 UTC | Fri, 03 Dec 2021 02:41:34 UTC |
| stop | -p | NoKubernetes-20211203024124-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:41:34 UTC | Fri, 03 Dec 2021 02:41:35 UTC |
| | NoKubernetes-20211203024124-532170 | | | | | |
| start | -p | NoKubernetes-20211203024124-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:41:35 UTC | Fri, 03 Dec 2021 02:41:53 UTC |
| | NoKubernetes-20211203024124-532170 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | NoKubernetes-20211203024124-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:41:53 UTC | Fri, 03 Dec 2021 02:41:59 UTC |
| | NoKubernetes-20211203024124-532170 | | | | | |
| delete | -p | kubenet-20211203024159-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:42:00 UTC | Fri, 03 Dec 2021 02:42:00 UTC |
| | kubenet-20211203024159-532170 | | | | | |
| delete | -p | flannel-20211203024200-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:42:01 UTC | Fri, 03 Dec 2021 02:42:01 UTC |
| | flannel-20211203024200-532170 | | | | | |
| delete | -p false-20211203024201-532170 | false-20211203024201-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:42:02 UTC | Fri, 03 Dec 2021 02:42:02 UTC |
| start | -p pause-20211203024124-532170 | pause-20211203024124-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:41:24 UTC | Fri, 03 Dec 2021 02:42:36 UTC |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=all --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p | offline-containerd-20211203024124-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:41:24 UTC | Fri, 03 Dec 2021 02:42:45 UTC |
| | offline-containerd-20211203024124-532170 | | | | | |
| | --alsologtostderr -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | offline-containerd-20211203024124-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:42:45 UTC | Fri, 03 Dec 2021 02:42:49 UTC |
| | offline-containerd-20211203024124-532170 | | | | | |
| start | -p pause-20211203024124-532170 | pause-20211203024124-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:42:36 UTC | Fri, 03 Dec 2021 02:42:52 UTC |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
* Log file created at: 2021/12/03 02:42:51
Running on machine: debian-jenkins-agent-9
Binary: Built with gc go1.17.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1203 02:42:51.961276 668072 out.go:297] Setting OutFile to fd 1 ...
I1203 02:42:51.961377 668072 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1203 02:42:51.961383 668072 out.go:310] Setting ErrFile to fd 2...
I1203 02:42:51.961389 668072 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1203 02:42:51.961546 668072 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/bin
I1203 02:42:51.961815 668072 out.go:304] Setting JSON to false
I1203 02:42:52.026252 668072 start.go:112] hostinfo: {"hostname":"debian-jenkins-agent-9","uptime":15933,"bootTime":1638483439,"procs":309,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
I1203 02:42:52.026414 668072 start.go:122] virtualization: kvm guest
I1203 02:42:52.028967 668072 out.go:176] * [running-upgrade-20211203024210-532170] minikube v1.24.0 on Debian 9.13 (kvm/amd64)
I1203 02:42:52.030832 668072 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/kubeconfig
I1203 02:42:52.029221 668072 notify.go:174] Checking for updates...
I1203 02:42:52.032438 668072 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64
I1203 02:42:52.034238 668072 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube
I1203 02:42:52.041293 668072 out.go:176] - MINIKUBE_LOCATION=12084
I1203 02:42:52.042326 668072 config.go:176] Loaded profile config "running-upgrade-20211203024210-532170": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1203 02:42:51.197265 663241 pod_ready.go:92] pod "kube-scheduler-pause-20211203024124-532170" in "kube-system" namespace has status "Ready":"True"
I1203 02:42:51.197290 663241 pod_ready.go:81] duration metric: took 398.571463ms waiting for pod "kube-scheduler-pause-20211203024124-532170" in "kube-system" namespace to be "Ready" ...
I1203 02:42:51.197303 663241 pod_ready.go:38] duration metric: took 872.56713ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1203 02:42:51.197329 663241 api_server.go:51] waiting for apiserver process to appear ...
I1203 02:42:51.197370 663241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1203 02:42:51.232106 663241 api_server.go:71] duration metric: took 1.015659557s to wait for apiserver process to appear ...
I1203 02:42:51.232136 663241 api_server.go:87] waiting for apiserver healthz status ...
I1203 02:42:51.232151 663241 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I1203 02:42:51.240577 663241 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
ok
I1203 02:42:51.241642 663241 api_server.go:140] control plane version: v1.22.4
I1203 02:42:51.241663 663241 api_server.go:130] duration metric: took 9.520048ms to wait for apiserver health ...
I1203 02:42:51.241673 663241 system_pods.go:43] waiting for kube-system pods to appear ...
I1203 02:42:51.399913 663241 system_pods.go:59] 8 kube-system pods found
I1203 02:42:51.399953 663241 system_pods.go:61] "coredns-78fcd69978-qwmpd" [7f97c64e-1e68-4356-a1b5-f77f7cc81b24] Running
I1203 02:42:51.399962 663241 system_pods.go:61] "etcd-pause-20211203024124-532170" [f00da2d2-7bda-4c00-8af5-91a118020c28] Running
I1203 02:42:51.399968 663241 system_pods.go:61] "kindnet-n6hkm" [368971a0-2c0b-44f2-b66d-3257c9f080f4] Running
I1203 02:42:51.399974 663241 system_pods.go:61] "kube-apiserver-pause-20211203024124-532170" [bdf80ccc-7fc5-4aae-9267-d2bbe3f31284] Running
I1203 02:42:51.399982 663241 system_pods.go:61] "kube-controller-manager-pause-20211203024124-532170" [87fb4ea5-8e16-4988-98e9-230278ac2373] Running
I1203 02:42:51.399987 663241 system_pods.go:61] "kube-proxy-5xmj2" [9b4f2238-af1f-487f-80ff-a0a932437cee] Running
I1203 02:42:51.399994 663241 system_pods.go:61] "kube-scheduler-pause-20211203024124-532170" [c5cc016b-3a88-43cd-8b4e-4dcef97d1600] Running
I1203 02:42:51.400004 663241 system_pods.go:61] "storage-provisioner" [9aa5302f-010c-4351-96c2-b2485180be47] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1203 02:42:51.400018 663241 system_pods.go:74] duration metric: took 158.339574ms to wait for pod list to return data ...
I1203 02:42:51.400030 663241 default_sa.go:34] waiting for default service account to be created ...
I1203 02:42:51.599179 663241 default_sa.go:45] found service account: "default"
I1203 02:42:51.599209 663241 default_sa.go:55] duration metric: took 199.172303ms for default service account to be created ...
I1203 02:42:51.599221 663241 system_pods.go:116] waiting for k8s-apps to be running ...
I1203 02:42:51.800091 663241 system_pods.go:86] 8 kube-system pods found
I1203 02:42:51.800124 663241 system_pods.go:89] "coredns-78fcd69978-qwmpd" [7f97c64e-1e68-4356-a1b5-f77f7cc81b24] Running
I1203 02:42:51.800134 663241 system_pods.go:89] "etcd-pause-20211203024124-532170" [f00da2d2-7bda-4c00-8af5-91a118020c28] Running
I1203 02:42:51.800140 663241 system_pods.go:89] "kindnet-n6hkm" [368971a0-2c0b-44f2-b66d-3257c9f080f4] Running
I1203 02:42:51.800147 663241 system_pods.go:89] "kube-apiserver-pause-20211203024124-532170" [bdf80ccc-7fc5-4aae-9267-d2bbe3f31284] Running
I1203 02:42:51.800153 663241 system_pods.go:89] "kube-controller-manager-pause-20211203024124-532170" [87fb4ea5-8e16-4988-98e9-230278ac2373] Running
I1203 02:42:51.800160 663241 system_pods.go:89] "kube-proxy-5xmj2" [9b4f2238-af1f-487f-80ff-a0a932437cee] Running
I1203 02:42:51.800166 663241 system_pods.go:89] "kube-scheduler-pause-20211203024124-532170" [c5cc016b-3a88-43cd-8b4e-4dcef97d1600] Running
I1203 02:42:51.800176 663241 system_pods.go:89] "storage-provisioner" [9aa5302f-010c-4351-96c2-b2485180be47] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1203 02:42:51.800193 663241 system_pods.go:126] duration metric: took 200.966966ms to wait for k8s-apps to be running ...
I1203 02:42:51.800204 663241 system_svc.go:44] waiting for kubelet service to be running ....
I1203 02:42:51.800252 663241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1203 02:42:51.813899 663241 system_svc.go:56] duration metric: took 13.683891ms WaitForService to wait for kubelet.
I1203 02:42:51.813931 663241 kubeadm.go:547] duration metric: took 1.597491319s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I1203 02:42:51.813963 663241 node_conditions.go:102] verifying NodePressure condition ...
I1203 02:42:51.999851 663241 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
I1203 02:42:51.999884 663241 node_conditions.go:123] node cpu capacity is 8
I1203 02:42:51.999901 663241 node_conditions.go:105] duration metric: took 185.932844ms to run NodePressure ...
I1203 02:42:51.999916 663241 start.go:234] waiting for startup goroutines ...
I1203 02:42:52.097263 663241 start.go:486] kubectl: 1.20.5, cluster: 1.22.4 (minor skew: 2)
I1203 02:42:52.099032 663241 out.go:176]
W1203 02:42:52.099217 663241 out.go:241] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilites with Kubernetes 1.22.4.
I1203 02:42:52.102250 663241 out.go:176] - Want kubectl v1.22.4? Try 'minikube kubectl -- get pods -A'
I1203 02:42:52.106560 663241 out.go:176] * Done! kubectl is now configured to use "pause-20211203024124-532170" cluster and "default" namespace by default
I1203 02:42:52.047008 668072 out.go:176] * Kubernetes 1.22.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.22.4
I1203 02:42:52.047071 668072 driver.go:343] Setting default libvirt URI to qemu:///system
I1203 02:42:52.131601 668072 docker.go:132] docker version: linux-19.03.15
I1203 02:42:52.131709 668072 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1203 02:42:52.319104 668072 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:222 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:83 OomKillDisable:true NGoroutines:88 SystemTime:2021-12-03 02:42:52.208265697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I1203 02:42:52.319277 668072 docker.go:237] overlay module found
I1203 02:42:52.322582 668072 out.go:176] * Using the docker driver based on existing profile
I1203 02:42:52.322622 668072 start.go:280] selected driver: docker
I1203 02:42:52.322790 668072 start.go:775] validating driver "docker" against &{Name:running-upgrade-20211203024210-532170 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:running-upgrade-20211203024210-532170 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.203 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:}
I1203 02:42:52.322913 668072 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
W1203 02:42:52.322962 668072 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W1203 02:42:52.322987 668072 out.go:241] ! Your cgroup does not allow setting memory.
I1203 02:42:52.324913 668072 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I1203 02:42:52.326103 668072 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1203 02:42:52.436205 668072 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:222 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:true NGoroutines:90 SystemTime:2021-12-03 02:42:52.377793694 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
W1203 02:42:52.436318 668072 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W1203 02:42:52.436348 668072 out.go:241] ! Your cgroup does not allow setting memory.
I1203 02:42:52.681871 668072 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I1203 02:42:52.682124 668072 cni.go:93] Creating CNI manager for ""
I1203 02:42:52.682145 668072 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I1203 02:42:52.682160 668072 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I1203 02:42:52.682166 668072 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I1203 02:42:52.682176 668072 start_flags.go:282] config:
{Name:running-upgrade-20211203024210-532170 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:running-upgrade-20211203024210-532170 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.203 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:}
I1203 02:42:49.634574 666582 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
I1203 02:42:49.634862 666582 start.go:160] libmachine.API.Create for "force-systemd-env-20211203024249-532170" (driver="docker")
I1203 02:42:49.634895 666582 client.go:168] LocalClient.Create starting
I1203 02:42:49.634994 666582 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/certs/ca.pem
I1203 02:42:49.635030 666582 main.go:130] libmachine: Decoding PEM data...
I1203 02:42:49.635051 666582 main.go:130] libmachine: Parsing certificate...
I1203 02:42:49.635136 666582 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/certs/cert.pem
I1203 02:42:49.635157 666582 main.go:130] libmachine: Decoding PEM data...
I1203 02:42:49.635171 666582 main.go:130] libmachine: Parsing certificate...
I1203 02:42:49.635597 666582 cli_runner.go:115] Run: docker network inspect force-systemd-env-20211203024249-532170 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1203 02:42:49.683508 666582 cli_runner.go:162] docker network inspect force-systemd-env-20211203024249-532170 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1203 02:42:49.683596 666582 network_create.go:254] running [docker network inspect force-systemd-env-20211203024249-532170] to gather additional debugging logs...
I1203 02:42:49.683619 666582 cli_runner.go:115] Run: docker network inspect force-systemd-env-20211203024249-532170
W1203 02:42:49.735645 666582 cli_runner.go:162] docker network inspect force-systemd-env-20211203024249-532170 returned with exit code 1
I1203 02:42:49.735683 666582 network_create.go:257] error running [docker network inspect force-systemd-env-20211203024249-532170]: docker network inspect force-systemd-env-20211203024249-532170: exit status 1
stdout:
[]
stderr:
Error: No such network: force-systemd-env-20211203024249-532170
I1203 02:42:49.735706 666582 network_create.go:259] output of [docker network inspect force-systemd-env-20211203024249-532170]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: force-systemd-env-20211203024249-532170
** /stderr **
I1203 02:42:49.735772 666582 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1203 02:42:49.795450 666582 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-da1d0795a7d3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:57:ce:ae:c0}}
I1203 02:42:49.796578 666582 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc0006da360] misses:0}
I1203 02:42:49.796625 666582 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1203 02:42:49.796646 666582 network_create.go:106] attempt to create docker network force-systemd-env-20211203024249-532170 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1203 02:42:49.796703 666582 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20211203024249-532170
I1203 02:42:49.890691 666582 network_create.go:90] docker network force-systemd-env-20211203024249-532170 192.168.58.0/24 created
I1203 02:42:49.890734 666582 kic.go:106] calculated static IP "192.168.58.2" for the "force-systemd-env-20211203024249-532170" container
I1203 02:42:49.890817 666582 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I1203 02:42:49.943574 666582 cli_runner.go:115] Run: docker volume create force-systemd-env-20211203024249-532170 --label name.minikube.sigs.k8s.io=force-systemd-env-20211203024249-532170 --label created_by.minikube.sigs.k8s.io=true
I1203 02:42:50.008492 666582 oci.go:102] Successfully created a docker volume force-systemd-env-20211203024249-532170
I1203 02:42:50.008584 666582 cli_runner.go:115] Run: docker run --rm --name force-systemd-env-20211203024249-532170-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20211203024249-532170 --entrypoint /usr/bin/test -v force-systemd-env-20211203024249-532170:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
I1203 02:42:51.009093 666582 cli_runner.go:168] Completed: docker run --rm --name force-systemd-env-20211203024249-532170-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20211203024249-532170 --entrypoint /usr/bin/test -v force-systemd-env-20211203024249-532170:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib: (1.000465905s)
I1203 02:42:51.009136 666582 oci.go:106] Successfully prepared a docker volume force-systemd-env-20211203024249-532170
W1203 02:42:51.009180 666582 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W1203 02:42:51.009198 666582 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I1203 02:42:51.009262 666582 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I1203 02:42:51.009271 666582 preload.go:132] Checking if preload exists for k8s version v1.22.4 and runtime containerd
I1203 02:42:51.009308 666582 kic.go:179] Starting extracting preloaded images to volume ...
I1203 02:42:51.009413 666582 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.4-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20211203024249-532170:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
I1203 02:42:51.186825 666582 cli_runner.go:115] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-20211203024249-532170 --name force-systemd-env-20211203024249-532170 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20211203024249-532170 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-20211203024249-532170 --network force-systemd-env-20211203024249-532170 --ip 192.168.58.2 --volume force-systemd-env-20211203024249-532170:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c
I1203 02:42:51.807228 666582 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211203024249-532170 --format={{.State.Running}}
I1203 02:42:51.866586 666582 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211203024249-532170 --format={{.State.Status}}
I1203 02:42:51.930312 666582 cli_runner.go:115] Run: docker exec force-systemd-env-20211203024249-532170 stat /var/lib/dpkg/alternatives/iptables
I1203 02:42:52.085168 666582 oci.go:281] the created container "force-systemd-env-20211203024249-532170" has a running status.
I1203 02:42:52.085200 666582 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/machines/force-systemd-env-20211203024249-532170/id_rsa...
I1203 02:42:52.230639 666582 vm_assets.go:155] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/machines/force-systemd-env-20211203024249-532170/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1203 02:42:52.230695 666582 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/machines/force-systemd-env-20211203024249-532170/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1203 02:42:52.970980 666582 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211203024249-532170 --format={{.State.Status}}
I1203 02:42:53.019240 666582 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1203 02:42:53.019262 666582 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-20211203024249-532170 chown docker:docker /home/docker/.ssh/authorized_keys]
I1203 02:42:54.811992 668072 out.go:176] * Starting control plane node running-upgrade-20211203024210-532170 in cluster running-upgrade-20211203024210-532170
I1203 02:42:54.812064 668072 cache.go:118] Beginning downloading kic base image for docker with containerd
I1203 02:42:56.382861 668072 out.go:176] * Pulling base image ...
I1203 02:42:56.382923 668072 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1203 02:42:56.383057 668072 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 in local docker daemon
I1203 02:42:56.479120 668072 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 in local docker daemon, skipping pull
I1203 02:42:56.479149 668072 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 exists in daemon, skipping load
W1203 02:42:56.506752 668072 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.20.0-containerd-overlay2-amd64.tar.lz4 status code: 404
I1203 02:42:56.506930 668072 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/profiles/running-upgrade-20211203024210-532170/config.json ...
I1203 02:42:56.507022 668072 cache.go:107] acquiring lock: {Name:mk3f28e99ceac9e8992f0b3b268ec7281c4e72db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:42:56.507077 668072 cache.go:107] acquiring lock: {Name:mk58b755235faafd792a55ffc678df1b94a68bed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:42:56.507083 668072 cache.go:107] acquiring lock: {Name:mke6a8256f44fd34328c51d78e752dc7a2edad5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:42:56.507192 668072 cache.go:107] acquiring lock: {Name:mk52be53a20a12a6477d4b0e8db57a1c19c9d6b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:42:56.507262 668072 cache.go:107] acquiring lock: {Name:mk7261047b045f5003b0a1499b53e56c4066fef9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:42:56.507284 668072 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
I1203 02:42:56.507302 668072 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 exists
I1203 02:42:56.507306 668072 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 271.401µs
I1203 02:42:56.507322 668072 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
I1203 02:42:56.507336 668072 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 exists
I1203 02:42:56.507328 668072 cache.go:96] cache image "k8s.gcr.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0" took 144.759µs
I1203 02:42:56.507345 668072 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 exists
I1203 02:42:56.507350 668072 cache.go:80] save to tar file k8s.gcr.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 succeeded
I1203 02:42:56.507342 668072 cache.go:107] acquiring lock: {Name:mkd85507bf8f9ec9c5c7efbe7e495c72c5a5c6cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:42:56.507348 668072 cache.go:107] acquiring lock: {Name:mke201119b191dfece838cdcc83d982d296b3f34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:42:56.507362 668072 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0" took 94.939µs
I1203 02:42:56.507376 668072 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 succeeded
I1203 02:42:56.507360 668072 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0" took 353.673µs
I1203 02:42:56.507385 668072 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 succeeded
I1203 02:42:56.507382 668072 cache.go:107] acquiring lock: {Name:mk807a6c3536c72c72f3be8529e05e94b09807ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:42:56.507402 668072 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
I1203 02:42:56.507403 668072 cache.go:107] acquiring lock: {Name:mk596458ff611e0392f6a3eaa557e2f3e7e8e7a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:42:56.507711 668072 cache.go:96] cache image "k8s.gcr.io/pause:3.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 543.549µs
I1203 02:42:56.507754 668072 cache.go:80] save to tar file k8s.gcr.io/pause:3.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
I1203 02:42:56.507836 668072 cache.go:107] acquiring lock: {Name:mk15948f5fe3546c3f6679e86494b38857a18360 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:42:56.508043 668072 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 exists
I1203 02:42:56.508069 668072 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0" took 666.176µs
I1203 02:42:56.508088 668072 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 succeeded
I1203 02:42:56.508129 668072 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 exists
I1203 02:42:56.508148 668072 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0" took 1.012254ms
I1203 02:42:56.508166 668072 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 succeeded
I1203 02:42:56.508200 668072 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I1203 02:42:56.508214 668072 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 837.091µs
I1203 02:42:56.508226 668072 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I1203 02:42:56.508257 668072 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 exists
I1203 02:42:56.508269 668072 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0" took 1.174072ms
I1203 02:42:56.508277 668072 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 succeeded
I1203 02:42:56.508303 668072 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 exists
I1203 02:42:56.508315 668072 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" took 915.402µs
I1203 02:42:56.508331 668072 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
I1203 02:42:56.508341 668072 cache.go:87] Successfully saved all images to host disk.
I1203 02:42:56.811418 668072 cache.go:206] Successfully downloaded all kic artifacts
I1203 02:42:56.811480 668072 start.go:313] acquiring machines lock for running-upgrade-20211203024210-532170: {Name:mkc65e2b625c3fa4d4186ddf14bfaee35a2f74a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:42:56.811721 668072 start.go:317] acquired machines lock for "running-upgrade-20211203024210-532170" in 207.453µs
I1203 02:42:56.811754 668072 start.go:93] Skipping create...Using existing machine configuration
I1203 02:42:56.811761 668072 fix.go:55] fixHost starting:
I1203 02:42:56.812076 668072 cli_runner.go:115] Run: docker container inspect running-upgrade-20211203024210-532170 --format={{.State.Status}}
I1203 02:42:56.869665 668072 fix.go:108] recreateIfNeeded on running-upgrade-20211203024210-532170: state=Running err=<nil>
W1203 02:42:58.017020 668072 fix.go:134] unexpected machine state, will restart: <nil>
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
44ad2903327a4 6e38f40d628db 8 seconds ago Running storage-provisioner 0 9d44abe734bc7
fa412d0bc64fe 8d147537fb7d1 26 seconds ago Running coredns 0 989b0a20cd6bf
9fe05a46928d7 6de166512aa22 39 seconds ago Running kindnet-cni 0 52fd468fab925
b96a30f689aa8 edeff87e48029 39 seconds ago Running kube-proxy 0 1150dc7540b90
da77c4c16ea38 0ce02f92d3e43 About a minute ago Running kube-controller-manager 0 75715f0308eb1
377954dfce779 721ba97f54a65 About a minute ago Running kube-scheduler 0 4b59fc65934a4
c6bb5925aa26b 0048118155842 About a minute ago Running etcd 0 26442a09720c6
7861cc2da74e1 8a5cc299272d9 About a minute ago Running kube-apiserver 0 1a438759f3a6e
*
* ==> containerd <==
* -- Logs begin at Fri 2021-12-03 02:41:27 UTC, end at Fri 2021-12-03 02:43:00 UTC. --
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.222703254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.222716851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.222726647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.222848435Z" level=warning msg="`default_runtime` is deprecated, please use `default_runtime_name` to reference the default configuration you have defined in `runtimes`"
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.222920521Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:default DefaultRuntime:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} Runtimes:map[default:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} runc:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc00041ffb0 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.mk NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:k8s.gcr.io/pause:3.5 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true IgnoreImageDefinedVolumes:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.223014188Z" level=info msg="Connect containerd service"
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.223064254Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.223824728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.223940610Z" level=info msg="Start subscribing containerd event"
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.224031243Z" level=info msg="Start recovering state"
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.224569581Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.224644393Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.224702180Z" level=info msg="containerd successfully booted in 0.040989s"
Dec 03 02:42:38 pause-20211203024124-532170 systemd[1]: Started containerd container runtime.
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.310686273Z" level=info msg="Start event monitor"
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.310736093Z" level=info msg="Start snapshots syncer"
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.310744594Z" level=info msg="Start cni network conf syncer"
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.310757514Z" level=info msg="Start streaming server"
Dec 03 02:42:51 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:51.156621095Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:9aa5302f-010c-4351-96c2-b2485180be47,Namespace:kube-system,Attempt:0,}"
Dec 03 02:42:51 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:51.186943591Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f pid=2529
Dec 03 02:42:51 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:51.361766697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:9aa5302f-010c-4351-96c2-b2485180be47,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f\""
Dec 03 02:42:51 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:51.365150740Z" level=info msg="CreateContainer within sandbox \"9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
Dec 03 02:42:51 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:51.436686048Z" level=info msg="CreateContainer within sandbox \"9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"44ad2903327a4831a287d02d9bf015c7c79184800d7de792ec88dd259c43c6f8\""
Dec 03 02:42:51 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:51.437875321Z" level=info msg="StartContainer for \"44ad2903327a4831a287d02d9bf015c7c79184800d7de792ec88dd259c43c6f8\""
Dec 03 02:42:51 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:51.586425386Z" level=info msg="StartContainer for \"44ad2903327a4831a287d02d9bf015c7c79184800d7de792ec88dd259c43c6f8\" returns successfully"
*
* ==> coredns [fa412d0bc64fe41cabe75d75b154341ad2989d0d273aca5f583e5b450247fcaf] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
CoreDNS-1.8.4
linux/amd64, go1.16.4, 053c4d5
*
* ==> describe nodes <==
*
* ==> dmesg <==
* [ +1.203764] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth344f8041
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff de 19 e6 42 b4 2c 08 06 .........B.,..
[Dec 3 02:35] cgroup: cgroup2: unknown option "nsdelegate"
[ +43.145422] cgroup: cgroup2: unknown option "nsdelegate"
[ +1.997001] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev vethae8775a3
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 09 60 d3 6c 6f 08 06 ........`.lo..
[Dec 3 02:36] cgroup: cgroup2: unknown option "nsdelegate"
[Dec 3 02:38] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethc07b7bb4
[ +0.000003] ll header: 00000000: ff ff ff ff ff ff 9a 08 d6 d0 27 5c 08 06 ..........'\..
[Dec 3 02:39] cgroup: cgroup2: unknown option "nsdelegate"
[Dec 3 02:40] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth5d5f7a44
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff 9e b4 3b e2 7b 9d 08 06 ........;.{...
[Dec 3 02:41] cgroup: cgroup2: unknown option "nsdelegate"
[ +14.204723] cgroup: cgroup2: unknown option "nsdelegate"
[ +1.025718] cgroup: cgroup2: unknown option "nsdelegate"
[ +0.991957] cgroup: cgroup2: unknown option "nsdelegate"
[ +19.678388] cgroup: cgroup2: unknown option "nsdelegate"
[ +2.060023] cgroup: cgroup2: unknown option "nsdelegate"
[Dec 3 02:42] cgroup: cgroup2: unknown option "nsdelegate"
[ +13.603429] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth170a53a8
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 4a 38 90 3b cc 08 06 .......J8.;...
[ +9.575101] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethe30be4e2
[ +0.000003] ll header: 00000000: ff ff ff ff ff ff aa eb 14 cd 69 e4 08 06 ..........i...
[ +9.369686] cgroup: cgroup2: unknown option "nsdelegate"
[Dec 3 02:43] cgroup: cgroup2: unknown option "nsdelegate"
*
* ==> etcd [c6bb5925aa26bb7f20b1c83f7c74ac39e07a5a726abc7531bf949e0549809c83] <==
* {"level":"warn","ts":"2021-12-03T02:42:54.631Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"1.361717606s","expected-duration":"1s"}
{"level":"warn","ts":"2021-12-03T02:42:54.852Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.854169973s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:4890"}
{"level":"info","ts":"2021-12-03T02:42:54.852Z","caller":"traceutil/trace.go:171","msg":"trace[725459631] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:529; }","duration":"1.854256104s","start":"2021-12-03T02:42:52.998Z","end":"2021-12-03T02:42:54.852Z","steps":["trace[725459631] 'range keys from in-memory index tree' (duration: 1.854072249s)"],"step_count":1}
{"level":"warn","ts":"2021-12-03T02:42:54.852Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.582355422s","expected-duration":"100ms","prefix":"","request":"header:<ID:8128009408346100277 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-20211203024124-532170\" mod_revision:510 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-20211203024124-532170\" value_size:530 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-20211203024124-532170\" > >>","response":"size:16"}
{"level":"warn","ts":"2021-12-03T02:42:54.852Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-12-03T02:42:52.998Z","time spent":"1.854314416s","remote":"127.0.0.1:33866","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":4914,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" "}
{"level":"info","ts":"2021-12-03T02:42:54.852Z","caller":"traceutil/trace.go:171","msg":"trace[615955715] linearizableReadLoop","detail":"{readStateIndex:548; appliedIndex:547; }","duration":"1.188717958s","start":"2021-12-03T02:42:53.664Z","end":"2021-12-03T02:42:54.852Z","steps":["trace[615955715] 'read index received' (duration: 968.013476ms)","trace[615955715] 'applied index is now lower than readState.Index' (duration: 220.703541ms)"],"step_count":2}
{"level":"info","ts":"2021-12-03T02:42:54.852Z","caller":"traceutil/trace.go:171","msg":"trace[1851729177] transaction","detail":"{read_only:false; response_revision:530; number_of_response:1; }","duration":"1.582656878s","start":"2021-12-03T02:42:53.270Z","end":"2021-12-03T02:42:54.852Z","steps":["trace[1851729177] 'compare' (duration: 1.582261694s)"],"step_count":1}
{"level":"warn","ts":"2021-12-03T02:42:54.852Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.613711605s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2021-12-03T02:42:54.852Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.188840443s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
{"level":"info","ts":"2021-12-03T02:42:54.852Z","caller":"traceutil/trace.go:171","msg":"trace[43474745] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:530; }","duration":"1.188875717s","start":"2021-12-03T02:42:53.664Z","end":"2021-12-03T02:42:54.852Z","steps":["trace[43474745] 'agreement among raft nodes before linearized reading' (duration: 1.188815061s)"],"step_count":1}
{"level":"info","ts":"2021-12-03T02:42:54.852Z","caller":"traceutil/trace.go:171","msg":"trace[1805756956] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:529; }","duration":"1.613993557s","start":"2021-12-03T02:42:53.238Z","end":"2021-12-03T02:42:54.852Z","steps":["trace[1805756956] 'range keys from in-memory index tree' (duration: 1.61359796s)"],"step_count":1}
{"level":"warn","ts":"2021-12-03T02:42:54.852Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-12-03T02:42:53.664Z","time spent":"1.188915825s","remote":"127.0.0.1:33864","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1153,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
{"level":"warn","ts":"2021-12-03T02:42:54.852Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-12-03T02:42:53.238Z","time spent":"1.614050289s","remote":"127.0.0.1:33934","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
{"level":"warn","ts":"2021-12-03T02:42:54.853Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-12-03T02:42:53.270Z","time spent":"1.582731298s","remote":"127.0.0.1:33918","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":598,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-20211203024124-532170\" mod_revision:510 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-20211203024124-532170\" value_size:530 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-20211203024124-532170\" > >"}
{"level":"warn","ts":"2021-12-03T02:42:55.362Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128009408346100279,"retry-timeout":"500ms"}
{"level":"warn","ts":"2021-12-03T02:42:55.862Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128009408346100279,"retry-timeout":"500ms"}
{"level":"warn","ts":"2021-12-03T02:42:56.363Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128009408346100279,"retry-timeout":"500ms"}
{"level":"warn","ts":"2021-12-03T02:42:56.800Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"1.939127407s","expected-duration":"1s"}
{"level":"info","ts":"2021-12-03T02:42:56.800Z","caller":"traceutil/trace.go:171","msg":"trace[121078242] linearizableReadLoop","detail":"{readStateIndex:549; appliedIndex:549; }","duration":"1.939141286s","start":"2021-12-03T02:42:54.861Z","end":"2021-12-03T02:42:56.800Z","steps":["trace[121078242] 'read index received' (duration: 1.939133379s)","trace[121078242] 'applied index is now lower than readState.Index' (duration: 6.677µs)"],"step_count":2}
{"level":"warn","ts":"2021-12-03T02:42:56.815Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.953947612s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/kubernetes\" ","response":"range_response_count:1 size:706"}
{"level":"warn","ts":"2021-12-03T02:42:56.815Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.95289822s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2021-12-03T02:42:56.815Z","caller":"traceutil/trace.go:171","msg":"trace[634251822] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:531; }","duration":"1.953003147s","start":"2021-12-03T02:42:54.862Z","end":"2021-12-03T02:42:56.815Z","steps":["trace[634251822] 'agreement among raft nodes before linearized reading' (duration: 1.938259419s)"],"step_count":1}
{"level":"info","ts":"2021-12-03T02:42:56.815Z","caller":"traceutil/trace.go:171","msg":"trace[1182520940] range","detail":"{range_begin:/registry/services/specs/default/kubernetes; range_end:; response_count:1; response_revision:531; }","duration":"1.954019471s","start":"2021-12-03T02:42:54.861Z","end":"2021-12-03T02:42:56.815Z","steps":["trace[1182520940] 'agreement among raft nodes before linearized reading' (duration: 1.93925253s)","trace[1182520940] 'range keys from in-memory index tree' (duration: 14.649038ms)"],"step_count":2}
{"level":"warn","ts":"2021-12-03T02:42:56.815Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-12-03T02:42:54.862Z","time spent":"1.953050451s","remote":"127.0.0.1:33934","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
{"level":"warn","ts":"2021-12-03T02:42:56.815Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-12-03T02:42:54.861Z","time spent":"1.954077543s","remote":"127.0.0.1:33880","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":730,"request content":"key:\"/registry/services/specs/default/kubernetes\" "}
*
* ==> kernel <==
* 02:43:11 up 4:25, 0 users, load average: 9.92, 4.17, 2.19
Linux pause-20211203024124-532170 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
*
* ==> kube-apiserver [7861cc2da74e14be1efce89a227b8a087e1a4ad74eea5a57fd9979faa37223b4] <==
* Trace[1329700100]: ---"About to write a response" 7966ms (02:42:32.807)
Trace[1329700100]: [7.966737646s] [7.966737646s] END
I1203 02:42:32.808554 1 trace.go:205] Trace[10124701]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-20211203024124-532170,user-agent:kubelet/v1.22.4 (linux/amd64) kubernetes/b695d79,audit-id:03c422dd-754f-491f-8b0f-8cda1b614985,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (03-Dec-2021 02:42:30.264) (total time: 2543ms):
Trace[10124701]: ---"About to write a response" 2537ms (02:42:32.801)
Trace[10124701]: [2.543694862s] [2.543694862s] END
I1203 02:42:32.811479 1 trace.go:205] Trace[1566228004]: "GuaranteedUpdate etcd3" type:*coordination.Lease (03-Dec-2021 02:42:31.447) (total time: 1363ms):
Trace[1566228004]: ---"Transaction committed" 1363ms (02:42:32.811)
Trace[1566228004]: [1.363871215s] [1.363871215s] END
I1203 02:42:32.811607 1 trace.go:205] Trace[659618151]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20211203024124-532170,user-agent:kubelet/v1.22.4 (linux/amd64) kubernetes/b695d79,audit-id:7f82b614-c0f3-45b7-a7e7-fd4d532ee2df,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (03-Dec-2021 02:42:31.447) (total time: 1364ms):
Trace[659618151]: ---"Object stored in database" 1363ms (02:42:32.811)
Trace[659618151]: [1.364142405s] [1.364142405s] END
I1203 02:42:54.853636 1 trace.go:205] Trace[921726475]: "GuaranteedUpdate etcd3" type:*coordination.Lease (03-Dec-2021 02:42:53.269) (total time: 1584ms):
Trace[921726475]: ---"Transaction committed" 1583ms (02:42:54.853)
Trace[921726475]: [1.584390803s] [1.584390803s] END
I1203 02:42:54.853975 1 trace.go:205] Trace[719452703]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20211203024124-532170,user-agent:kubelet/v1.22.4 (linux/amd64) kubernetes/b695d79,audit-id:2d516ca3-d159-452f-a840-f129cf10d146,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (03-Dec-2021 02:42:53.269) (total time: 1584ms):
Trace[719452703]: ---"Object stored in database" 1584ms (02:42:54.853)
Trace[719452703]: [1.584903164s] [1.584903164s] END
I1203 02:42:54.854055 1 trace.go:205] Trace[1850500647]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:4359df40-5daa-4a50-bd70-f48390ea66ec,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (03-Dec-2021 02:42:53.663) (total time: 1190ms):
Trace[1850500647]: ---"About to write a response" 1190ms (02:42:54.853)
Trace[1850500647]: [1.190501383s] [1.190501383s] END
I1203 02:42:54.857513 1 trace.go:205] Trace[746149164]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (03-Dec-2021 02:42:52.997) (total time: 1859ms):
Trace[746149164]: [1.859677903s] [1.859677903s] END
I1203 02:42:54.858039 1 trace.go:205] Trace[100308926]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:9ee7eb23-38af-4c85-8170-deb854735830,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (03-Dec-2021 02:42:52.997) (total time: 1860ms):
Trace[100308926]: ---"Listing from storage done" 1859ms (02:42:54.857)
Trace[100308926]: [1.860215353s] [1.860215353s] END
*
* ==> kube-controller-manager [da77c4c16ea386dda39fcf91ff000cf63eb9f9b8cef0af731ccbbe2fe9f197dd] <==
* I1203 02:42:19.598296 1 shared_informer.go:247] Caches are synced for crt configmap
I1203 02:42:19.598317 1 shared_informer.go:247] Caches are synced for bootstrap_signer
I1203 02:42:19.598432 1 shared_informer.go:247] Caches are synced for resource quota
I1203 02:42:19.598469 1 shared_informer.go:247] Caches are synced for PVC protection
I1203 02:42:19.598487 1 shared_informer.go:247] Caches are synced for deployment
I1203 02:42:19.598941 1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-pause-20211203024124-532170" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1203 02:42:19.605085 1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-pause-20211203024124-532170" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1203 02:42:19.623094 1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-pause-20211203024124-532170" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1203 02:42:19.629084 1 shared_informer.go:247] Caches are synced for persistent volume
I1203 02:42:19.638354 1 shared_informer.go:247] Caches are synced for namespace
I1203 02:42:19.641965 1 shared_informer.go:247] Caches are synced for service account
I1203 02:42:19.697287 1 shared_informer.go:247] Caches are synced for attach detach
I1203 02:42:19.698168 1 shared_informer.go:247] Caches are synced for PV protection
I1203 02:42:19.698204 1 shared_informer.go:247] Caches are synced for expand
I1203 02:42:19.723295 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5xmj2"
I1203 02:42:19.723336 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-n6hkm"
I1203 02:42:19.723690 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
I1203 02:42:19.800345 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-4z8xc"
I1203 02:42:19.829969 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-qwmpd"
I1203 02:42:20.016290 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
I1203 02:42:20.026075 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-4z8xc"
I1203 02:42:20.097398 1 shared_informer.go:247] Caches are synced for garbage collector
I1203 02:42:20.177064 1 shared_informer.go:247] Caches are synced for garbage collector
I1203 02:42:20.177104 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1203 02:42:34.546360 1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
*
* ==> kube-proxy [b96a30f689aa838a728994e19ad57f3194b434b5136e1ecab8a96d391bc05eaf] <==
* I1203 02:42:20.634200 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
I1203 02:42:20.634302 1 server_others.go:140] Detected node IP 192.168.49.2
W1203 02:42:20.634321 1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
I1203 02:42:21.018258 1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I1203 02:42:21.018308 1 server_others.go:212] Using iptables Proxier.
I1203 02:42:21.018322 1 server_others.go:219] creating dualStackProxier for iptables.
W1203 02:42:21.018342 1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I1203 02:42:21.018662 1 server.go:649] Version: v1.22.4
I1203 02:42:21.020100 1 config.go:224] Starting endpoint slice config controller
I1203 02:42:21.020120 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1203 02:42:21.020193 1 config.go:315] Starting service config controller
I1203 02:42:21.020198 1 shared_informer.go:240] Waiting for caches to sync for service config
I1203 02:42:21.121119 1 shared_informer.go:247] Caches are synced for service config
I1203 02:42:21.121195 1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [377954dfce779fd5854219f915357e582012617cb03c0a70c60b9987499185ac] <==
* W1203 02:42:03.294201 1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1203 02:42:03.406171 1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
I1203 02:42:03.406300 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1203 02:42:03.406324 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1203 02:42:03.406344 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E1203 02:42:03.407679 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1203 02:42:03.415896 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1203 02:42:03.416427 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E1203 02:42:03.418623 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1203 02:42:03.421833 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1203 02:42:03.422567 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1203 02:42:03.422726 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1203 02:42:03.422862 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1203 02:42:03.423000 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1203 02:42:03.423347 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1203 02:42:03.423470 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E1203 02:42:03.423544 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1203 02:42:03.423558 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1203 02:42:03.423644 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E1203 02:42:03.424827 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1203 02:42:04.333813 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1203 02:42:04.364124 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1203 02:42:04.365071 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1203 02:42:04.472526 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
I1203 02:42:04.906826 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Fri 2021-12-03 02:41:27 UTC, end at Fri 2021-12-03 02:43:12 UTC. --
Dec 03 02:42:19 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:19.759016 1250 topology_manager.go:200] "Topology Admit Handler"
Dec 03 02:42:19 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:19.769175 1250 topology_manager.go:200] "Topology Admit Handler"
Dec 03 02:42:19 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:19.862569 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/368971a0-2c0b-44f2-b66d-3257c9f080f4-cni-cfg\") pod \"kindnet-n6hkm\" (UID: \"368971a0-2c0b-44f2-b66d-3257c9f080f4\") "
Dec 03 02:42:19 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:19.862653 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b4f2238-af1f-487f-80ff-a0a932437cee-lib-modules\") pod \"kube-proxy-5xmj2\" (UID: \"9b4f2238-af1f-487f-80ff-a0a932437cee\") "
Dec 03 02:42:19 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:19.862702 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/368971a0-2c0b-44f2-b66d-3257c9f080f4-xtables-lock\") pod \"kindnet-n6hkm\" (UID: \"368971a0-2c0b-44f2-b66d-3257c9f080f4\") "
Dec 03 02:42:19 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:19.862735 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/368971a0-2c0b-44f2-b66d-3257c9f080f4-lib-modules\") pod \"kindnet-n6hkm\" (UID: \"368971a0-2c0b-44f2-b66d-3257c9f080f4\") "
Dec 03 02:42:19 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:19.862769 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b4f2238-af1f-487f-80ff-a0a932437cee-xtables-lock\") pod \"kube-proxy-5xmj2\" (UID: \"9b4f2238-af1f-487f-80ff-a0a932437cee\") "
Dec 03 02:42:19 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:19.862848 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b4f2238-af1f-487f-80ff-a0a932437cee-kube-proxy\") pod \"kube-proxy-5xmj2\" (UID: \"9b4f2238-af1f-487f-80ff-a0a932437cee\") "
Dec 03 02:42:19 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:19.862905 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fj6q\" (UniqueName: \"kubernetes.io/projected/368971a0-2c0b-44f2-b66d-3257c9f080f4-kube-api-access-6fj6q\") pod \"kindnet-n6hkm\" (UID: \"368971a0-2c0b-44f2-b66d-3257c9f080f4\") "
Dec 03 02:42:19 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:19.862938 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjt4z\" (UniqueName: \"kubernetes.io/projected/9b4f2238-af1f-487f-80ff-a0a932437cee-kube-api-access-cjt4z\") pod \"kube-proxy-5xmj2\" (UID: \"9b4f2238-af1f-487f-80ff-a0a932437cee\") "
Dec 03 02:42:21 pause-20211203024124-532170 kubelet[1250]: E1203 02:42:21.617953 1250 kubelet.go:2337] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 03 02:42:32 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:32.825477 1250 topology_manager.go:200] "Topology Admit Handler"
Dec 03 02:42:32 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:32.961188 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f97c64e-1e68-4356-a1b5-f77f7cc81b24-config-volume\") pod \"coredns-78fcd69978-qwmpd\" (UID: \"7f97c64e-1e68-4356-a1b5-f77f7cc81b24\") "
Dec 03 02:42:32 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:32.961235 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt727\" (UniqueName: \"kubernetes.io/projected/7f97c64e-1e68-4356-a1b5-f77f7cc81b24-kube-api-access-wt727\") pod \"coredns-78fcd69978-qwmpd\" (UID: \"7f97c64e-1e68-4356-a1b5-f77f7cc81b24\") "
Dec 03 02:42:38 pause-20211203024124-532170 kubelet[1250]: W1203 02:42:38.155456 1250 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock /run/containerd/containerd.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
Dec 03 02:42:38 pause-20211203024124-532170 kubelet[1250]: W1203 02:42:38.155462 1250 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock /run/containerd/containerd.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
Dec 03 02:42:38 pause-20211203024124-532170 kubelet[1250]: E1203 02:42:38.524995 1250 remote_runtime.go:207] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\"" filter="nil"
Dec 03 02:42:38 pause-20211203024124-532170 kubelet[1250]: E1203 02:42:38.525058 1250 kuberuntime_sandbox.go:281] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
Dec 03 02:42:38 pause-20211203024124-532170 kubelet[1250]: E1203 02:42:38.525083 1250 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
Dec 03 02:42:50 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:50.840523 1250 topology_manager.go:200] "Topology Admit Handler"
Dec 03 02:42:50 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:50.997439 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9aa5302f-010c-4351-96c2-b2485180be47-tmp\") pod \"storage-provisioner\" (UID: \"9aa5302f-010c-4351-96c2-b2485180be47\") "
Dec 03 02:42:50 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:50.997691 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfl2n\" (UniqueName: \"kubernetes.io/projected/9aa5302f-010c-4351-96c2-b2485180be47-kube-api-access-cfl2n\") pod \"storage-provisioner\" (UID: \"9aa5302f-010c-4351-96c2-b2485180be47\") "
Dec 03 02:42:54 pause-20211203024124-532170 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
Dec 03 02:42:54 pause-20211203024124-532170 systemd[1]: kubelet.service: Succeeded.
Dec 03 02:42:54 pause-20211203024124-532170 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
*
* ==> storage-provisioner [44ad2903327a4831a287d02d9bf015c7c79184800d7de792ec88dd259c43c6f8] <==
* I1203 02:42:51.601493 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1203 02:42:51.615295 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1203 02:42:51.615709 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1203 02:42:51.641608 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1203 02:42:51.641868 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20211203024124-532170_e895639e-54d7-41f8-ba55-6be5b703777d!
I1203 02:42:51.643094 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fbc908e7-bac3-41f4-ad8a-28a0acbab933", APIVersion:"v1", ResourceVersion:"527", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20211203024124-532170_e895639e-54d7-41f8-ba55-6be5b703777d became leader
I1203 02:42:51.742125 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20211203024124-532170_e895639e-54d7-41f8-ba55-6be5b703777d!
-- /stdout --
** stderr **
E1203 02:43:10.253042 669036 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
! unable to fetch logs for: describe nodes
** /stderr **
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20211203024124-532170 -n pause-20211203024124-532170
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20211203024124-532170 -n pause-20211203024124-532170: exit status 2 (512.281465ms)
-- stdout --
Paused
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:257: "pause-20211203024124-532170" apiserver is not running, skipping kubectl commands (state="Paused")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======> post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run: docker inspect pause-20211203024124-532170
helpers_test.go:236: (dbg) docker inspect pause-20211203024124-532170:
-- stdout --
[
{
"Id": "2fc356be3f9aeb37d7f720969ef19bf8110acac88780406b5d2bec139f1913fc",
"Created": "2021-12-03T02:41:26.275196666Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 651787,
"ExitCode": 0,
"Error": "",
"StartedAt": "2021-12-03T02:41:27.094995507Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:e2a6c047beddf8261495222adf87089305bbc18e350587b01ebe3725535b5871",
"ResolvConfPath": "/var/lib/docker/containers/2fc356be3f9aeb37d7f720969ef19bf8110acac88780406b5d2bec139f1913fc/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/2fc356be3f9aeb37d7f720969ef19bf8110acac88780406b5d2bec139f1913fc/hostname",
"HostsPath": "/var/lib/docker/containers/2fc356be3f9aeb37d7f720969ef19bf8110acac88780406b5d2bec139f1913fc/hosts",
"LogPath": "/var/lib/docker/containers/2fc356be3f9aeb37d7f720969ef19bf8110acac88780406b5d2bec139f1913fc/2fc356be3f9aeb37d7f720969ef19bf8110acac88780406b5d2bec139f1913fc-json.log",
"Name": "/pause-20211203024124-532170",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"pause-20211203024124-532170:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "pause-20211203024124-532170",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [
{
"PathOnHost": "/dev/fuse",
"PathInContainer": "/dev/fuse",
"CgroupPermissions": "rwm"
}
],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/7b5a1f59d5810bced6fa94f13fccc8233a0a812da9bb925602d8108fc5966bbf-init/diff:/var/lib/docker/overlay2/6ac4080541cfeacde0b54b862ed3e8c1497bf3d3f22b73f3dd4ce1f668e19c72/diff:/var/lib/docker/overlay2/3e23ff251c4a4f7c2bc4e503098b4a2c6cc61a1d99b51a599fab585fe722bb1b/diff:/var/lib/docker/overlay2/cc6e93735a0ca1de72f287a78429adcbf52754f7b56e82fd10136b36df126cb7/diff:/var/lib/docker/overlay2/9b91525d2aa18b23a08318fd216cf0dd395530e8ee0971ef4ee837687f222c44/diff:/var/lib/docker/overlay2/afd7645dd1e3de615d5e6b8e38717640c35dd9d0dbad5d79e1e72c29f519e047/diff:/var/lib/docker/overlay2/e42185b4ffd5fd72d5dd3032240c65ebde359f8e05c73af3d0187f32c837a786/diff:/var/lib/docker/overlay2/fc00a8e7d591e187ad76a50b07d8c5a06e15c107d4f840ccea15b68e28da1980/diff:/var/lib/docker/overlay2/a29bf1481ece426838b6a498b3effdf9ff07fce93c547917247a80b257a1fe6a/diff:/var/lib/docker/overlay2/d08861a8de693171af3100608e63638de0eff4ee0ab9dad23a141b63c9fc78ba/diff:/var/lib/docker/overlay2/a810fd33b8bdb6290817982a645db1f3f53d332f221af803394694d926de6ca1/diff:/var/lib/docker/overlay2/82d5a8bdd562d849d98ab607f16a3a733ccc7b8f078412597b1324d4fadd3be1/diff:/var/lib/docker/overlay2/1a93d862337e75ae7f4350d3b11066c9de4c24aceee76546de610057f617281d/diff:/var/lib/docker/overlay2/016cb2f9bec981cb59ea9c574c581bad8e88da405748168504b409ab365eb79b/diff:/var/lib/docker/overlay2/83e41d3d061b2ae009fb28d0223691d1ec84632e4b835f49eec164d4466dae2d/diff:/var/lib/docker/overlay2/0ce62a53786be1b733f4d9a3f0bb5d6eddb1df23134f5b87286ea0a0cf9fbbf0/diff:/var/lib/docker/overlay2/c13141fe83f4579fcc3d40c499982793faebda827f5ca6cc5534911b1ec393f6/diff:/var/lib/docker/overlay2/55fce7735b2305346d30a554960b07ae732bacdfbc399692dd2506c80daf9877/diff:/var/lib/docker/overlay2/252a160a3b32162d3ff13c40ccc33cabd273340235797ebee736493a1029eda7/diff:/var/lib/docker/overlay2/fce6704ef3f6cc37735e8972372e0f5e0bfbf4af83f5ff6fd027e174c59578e0/diff:/var/lib/docker/overlay2/88c75fcb26b5194e243c651225b00fb638c5ea62eb979d7df5b728df5ef5195a/diff:/var/lib/docker/overlay2/0771e6e29be058dc4b8a5521020f5d66e3e4f0996747baea7cb18371e3545b6b/diff:/var/lib/docker/overlay2/9b15999e93e34bb79fccce68ef9de2dfe1780f1d4c0595f52d15981d7090babb/diff:/var/lib/docker/overlay2/8fd40ec7570ab0690e15c09834e3bc5284b3badf0f5b98cf234cc063d023fbce/diff:/var/lib/docker/overlay2/317d010d230847ac38e52570cb1bf66b55a735ba2fcd26045ca1d928fee269e9/diff:/var/lib/docker/overlay2/9e0744f6558a5e30811c7bc770686bb03018000ec5504f0a7342d648858c6520/diff:/var/lib/docker/overlay2/2b1e4978e05c0bbd41e0db471a4546b47e88d4d07bba062e641ee19252fbe324/diff:/var/lib/docker/overlay2/8a5ad66f897aa93b3a59cb81de4d6f3a4437cdf29b1daaac0470479e0627d353/diff:/var/lib/docker/overlay2/4c73a8fc02a1854c714f156081c2ed75e595e3dde9226820f21ff9353b52fbb7/diff:/var/lib/docker/overlay2/be02751327b17c0e71631c5e6cbea324ca807f35ff112d906955328aabd427d8/diff:/var/lib/docker/overlay2/500480e16cba6dcc737fbaabcd657d62a6d877b3505eec63f1de6e5e3c9dbb92/diff:/var/lib/docker/overlay2/2cd99ae5cfb49d19c25b95ac17d84a6889e691672d6d93a9f066ca8b7c45289f/diff:/var/lib/docker/overlay2/bc1787830a034113efe9235ddf0dc8652dfff6e7926a63d839120bbe9ebc0b99/diff:/var/lib/docker/overlay2/44149fe8e3297368fee684058e3d52cef2712454b3145aa883bcb63bbae8542f/diff:/var/lib/docker/overlay2/433b47ea5ada41625d6479daf7944dc706ab5abbcad92f65a32b40a754cbf645/diff:/var/lib/docker/overlay2/77082cd9d165ff33aaddc632231ac2ea8dc30f9073037fc9c3c97f50db12d5b2/diff:/var/lib/docker/overlay2/631d81e1d8b807f0a941170215bfd43566c2823909251d69246f84dc74dff425/diff:/var/lib/docker/overlay2/6eb234f85614a81f5323a7f51a68384027e94afd402c9851374d990371ebf594/diff:/var/lib/docker/overlay2/8c188ae964eaa37da1b63d82bc30f6d81e8a06ee2c1af3f12719f9cf443b6e09/diff:/var/lib/docker/overlay2/86d4eaadf1b0680172d06e3118a53a77440af3c8e5bde8c29bb97e7f94599244/diff:/var/lib/docker/overlay2/78fadeb69e3f7825d1bad64e2e686eff962fc4ed859de8fd0d2b2d30d56510a1/diff",
"MergedDir": "/var/lib/docker/overlay2/7b5a1f59d5810bced6fa94f13fccc8233a0a812da9bb925602d8108fc5966bbf/merged",
"UpperDir": "/var/lib/docker/overlay2/7b5a1f59d5810bced6fa94f13fccc8233a0a812da9bb925602d8108fc5966bbf/diff",
"WorkDir": "/var/lib/docker/overlay2/7b5a1f59d5810bced6fa94f13fccc8233a0a812da9bb925602d8108fc5966bbf/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "pause-20211203024124-532170",
"Source": "/var/lib/docker/volumes/pause-20211203024124-532170/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "pause-20211203024124-532170",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "pause-20211203024124-532170",
"name.minikube.sigs.k8s.io": "pause-20211203024124-532170",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "c1242b1b033f16bedee595a0280553c95576e94729234572b5dbfcbd58cee8ec",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33286"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33285"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33282"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33284"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33283"
}
]
},
"SandboxKey": "/var/run/docker/netns/c1242b1b033f",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"pause-20211203024124-532170": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": [
"2fc356be3f9a"
],
"NetworkID": "da1d0795a7d3bb5f007c696fba96d61b49ce95f6e99122ac37e1226dd45b8f38",
"EndpointID": "59b7bb705dfc9e261679395a707aa0e1863d48a4a93f55f99fdcc3b7f3f19ed3",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20211203024124-532170 -n pause-20211203024124-532170
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20211203024124-532170 -n pause-20211203024124-532170: exit status 2 (521.608995ms)
-- stdout --
Running
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======> post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p pause-20211203024124-532170 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p pause-20211203024124-532170 logs -n 25: (11.577067765s)
helpers_test.go:253: TestPause/serial/Pause logs:
-- stdout --
*
* ==> Audit <==
* |---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
| delete | -p | multinode-20211203022621-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:36:40 UTC | Fri, 03 Dec 2021 02:36:46 UTC |
| | multinode-20211203022621-532170 | | | | | |
| start | -p | test-preload-20211203023646-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:36:46 UTC | Fri, 03 Dec 2021 02:38:22 UTC |
| | test-preload-20211203023646-532170 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.17.0 | | | | | |
| ssh | -p | test-preload-20211203023646-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:38:22 UTC | Fri, 03 Dec 2021 02:38:25 UTC |
| | test-preload-20211203023646-532170 | | | | | |
| | -- sudo crictl pull busybox | | | | | |
| start | -p | test-preload-20211203023646-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:38:25 UTC | Fri, 03 Dec 2021 02:39:08 UTC |
| | test-preload-20211203023646-532170 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --wait=true --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.17.3 | | | | | |
| ssh | -p | test-preload-20211203023646-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:39:08 UTC | Fri, 03 Dec 2021 02:39:09 UTC |
| | test-preload-20211203023646-532170 | | | | | |
| | -- sudo crictl image ls | | | | | |
| delete | -p | test-preload-20211203023646-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:39:09 UTC | Fri, 03 Dec 2021 02:39:12 UTC |
| | test-preload-20211203023646-532170 | | | | | |
| start | -p | scheduled-stop-20211203023912-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:39:12 UTC | Fri, 03 Dec 2021 02:39:53 UTC |
| | scheduled-stop-20211203023912-532170 | | | | | |
| | --memory=2048 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| stop | -p | scheduled-stop-20211203023912-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:39:54 UTC | Fri, 03 Dec 2021 02:39:54 UTC |
| | scheduled-stop-20211203023912-532170 | | | | | |
| | --cancel-scheduled | | | | | |
| stop | -p | scheduled-stop-20211203023912-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:40:20 UTC | Fri, 03 Dec 2021 02:40:56 UTC |
| | scheduled-stop-20211203023912-532170 | | | | | |
| | --schedule 15s | | | | | |
| delete | -p | scheduled-stop-20211203023912-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:41:05 UTC | Fri, 03 Dec 2021 02:41:10 UTC |
| | scheduled-stop-20211203023912-532170 | | | | | |
| delete | -p | insufficient-storage-20211203024110-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:41:17 UTC | Fri, 03 Dec 2021 02:41:24 UTC |
| | insufficient-storage-20211203024110-532170 | | | | | |
| start | -p | NoKubernetes-20211203024124-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:41:24 UTC | Fri, 03 Dec 2021 02:41:32 UTC |
| | NoKubernetes-20211203024124-532170 | | | | | |
| | --no-kubernetes --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| profile | list | minikube | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:41:33 UTC | Fri, 03 Dec 2021 02:41:33 UTC |
| profile | list --output=json | minikube | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:41:33 UTC | Fri, 03 Dec 2021 02:41:34 UTC |
| stop | -p | NoKubernetes-20211203024124-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:41:34 UTC | Fri, 03 Dec 2021 02:41:35 UTC |
| | NoKubernetes-20211203024124-532170 | | | | | |
| start | -p | NoKubernetes-20211203024124-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:41:35 UTC | Fri, 03 Dec 2021 02:41:53 UTC |
| | NoKubernetes-20211203024124-532170 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | NoKubernetes-20211203024124-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:41:53 UTC | Fri, 03 Dec 2021 02:41:59 UTC |
| | NoKubernetes-20211203024124-532170 | | | | | |
| delete | -p | kubenet-20211203024159-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:42:00 UTC | Fri, 03 Dec 2021 02:42:00 UTC |
| | kubenet-20211203024159-532170 | | | | | |
| delete | -p | flannel-20211203024200-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:42:01 UTC | Fri, 03 Dec 2021 02:42:01 UTC |
| | flannel-20211203024200-532170 | | | | | |
| delete | -p false-20211203024201-532170 | false-20211203024201-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:42:02 UTC | Fri, 03 Dec 2021 02:42:02 UTC |
| start | -p pause-20211203024124-532170 | pause-20211203024124-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:41:24 UTC | Fri, 03 Dec 2021 02:42:36 UTC |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=all --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p | offline-containerd-20211203024124-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:41:24 UTC | Fri, 03 Dec 2021 02:42:45 UTC |
| | offline-containerd-20211203024124-532170 | | | | | |
| | --alsologtostderr -v=1 --memory=2048 | | | | | |
| | --wait=true --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | offline-containerd-20211203024124-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:42:45 UTC | Fri, 03 Dec 2021 02:42:49 UTC |
| | offline-containerd-20211203024124-532170 | | | | | |
| start | -p pause-20211203024124-532170 | pause-20211203024124-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:42:36 UTC | Fri, 03 Dec 2021 02:42:52 UTC |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| -p | pause-20211203024124-532170 | pause-20211203024124-532170 | jenkins | v1.24.0 | Fri, 03 Dec 2021 02:42:59 UTC | Fri, 03 Dec 2021 02:43:12 UTC |
| | logs -n 25 | | | | | |
|---------|--------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
* Log file created at: 2021/12/03 02:43:05
Running on machine: debian-jenkins-agent-9
Binary: Built with gc go1.17.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
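Every entry below follows the klog header format described above. A quick shell sketch of splitting such a line into severity and message (the sample line is copied verbatim from this log; the parsing approach is illustrative, not how minikube itself consumes logs):

```shell
# klog format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
# The severity is the first character; the message follows "file:line] ".
line='I1203 02:43:05.053241 670481 out.go:297] Setting OutFile to fd 1 ...'
sev=${line%%[0-9]*}   # strip everything from the first digit on -> "I"
msg=${line#*] }       # strip through the first "] " -> the message text
echo "$sev|$msg"
```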
I1203 02:43:05.053241 670481 out.go:297] Setting OutFile to fd 1 ...
I1203 02:43:05.053349 670481 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1203 02:43:05.053360 670481 out.go:310] Setting ErrFile to fd 2...
I1203 02:43:05.053366 670481 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1203 02:43:05.053498 670481 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/bin
I1203 02:43:05.053757 670481 out.go:304] Setting JSON to false
I1203 02:43:05.092504 670481 start.go:112] hostinfo: {"hostname":"debian-jenkins-agent-9","uptime":15946,"bootTime":1638483439,"procs":273,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
I1203 02:43:05.092647 670481 start.go:122] virtualization: kvm guest
I1203 02:43:05.095532 670481 out.go:176] * [stopped-upgrade-20211203024124-532170] minikube v1.24.0 on Debian 9.13 (kvm/amd64)
I1203 02:43:05.097255 670481 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/kubeconfig
I1203 02:43:05.095733 670481 notify.go:174] Checking for updates...
I1203 02:43:05.098850 670481 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64
I1203 02:43:05.100471 670481 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube
I1203 02:43:05.101957 670481 out.go:176] - MINIKUBE_LOCATION=12084
I1203 02:43:05.102446 670481 config.go:176] Loaded profile config "stopped-upgrade-20211203024124-532170": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1203 02:43:05.105412 670481 out.go:176] * Kubernetes 1.22.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.22.4
I1203 02:43:05.105461 670481 driver.go:343] Setting default libvirt URI to qemu:///system
I1203 02:43:05.165355 670481 docker.go:132] docker version: linux-19.03.15
I1203 02:43:05.165442 670481 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1203 02:43:05.259347 670481 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:222 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:50 SystemTime:2021-12-03 02:43:05.203811101 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnin
gs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I1203 02:43:05.259486 670481 docker.go:237] overlay module found
I1203 02:43:05.261650 670481 out.go:176] * Using the docker driver based on existing profile
I1203 02:43:05.261677 670481 start.go:280] selected driver: docker
I1203 02:43:05.261683 670481 start.go:775] validating driver "docker" against &{Name:stopped-upgrade-20211203024124-532170 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:stopped-upgrade-20211203024124-532170 Namespace:default APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.203 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:}
I1203 02:43:05.261780 670481 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
W1203 02:43:05.261826 670481 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W1203 02:43:05.261849 670481 out.go:241] ! Your cgroup does not allow setting memory.
I1203 02:43:05.263387 670481 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I1203 02:43:05.264549 670481 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1203 02:43:05.366635 670481 info.go:263] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:222 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:50 SystemTime:2021-12-03 02:43:05.304972308 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-16-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742200832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnin
gs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
W1203 02:43:05.366782 670481 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W1203 02:43:05.366817 670481 out.go:241] ! Your cgroup does not allow setting memory.
I1203 02:43:05.368741 670481 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I1203 02:43:05.368842 670481 cni.go:93] Creating CNI manager for ""
I1203 02:43:05.368861 670481 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I1203 02:43:05.368874 670481 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I1203 02:43:05.368883 670481 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I1203 02:43:05.368895 670481 start_flags.go:282] config:
{Name:stopped-upgrade-20211203024124-532170 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:stopped-upgrade-20211203024124-532170 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.203 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:}
I1203 02:43:05.370782 670481 out.go:176] * Starting control plane node stopped-upgrade-20211203024124-532170 in cluster stopped-upgrade-20211203024124-532170
I1203 02:43:05.370819 670481 cache.go:118] Beginning downloading kic base image for docker with containerd
I1203 02:43:05.372360 670481 out.go:176] * Pulling base image ...
I1203 02:43:05.372395 670481 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1203 02:43:05.372503 670481 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 in local docker daemon
W1203 02:43:05.483728 670481 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.20.0-containerd-overlay2-amd64.tar.lz4 status code: 404
I1203 02:43:05.483888 670481 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/profiles/stopped-upgrade-20211203024124-532170/config.json ...
I1203 02:43:05.483967 670481 cache.go:107] acquiring lock: {Name:mk58b755235faafd792a55ffc678df1b94a68bed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:43:05.484055 670481 cache.go:107] acquiring lock: {Name:mk52be53a20a12a6477d4b0e8db57a1c19c9d6b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:43:05.484609 670481 cache.go:107] acquiring lock: {Name:mk7261047b045f5003b0a1499b53e56c4066fef9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:43:05.484626 670481 cache.go:107] acquiring lock: {Name:mk3f28e99ceac9e8992f0b3b268ec7281c4e72db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:43:05.484665 670481 cache.go:107] acquiring lock: {Name:mk15948f5fe3546c3f6679e86494b38857a18360 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:43:05.483973 670481 cache.go:107] acquiring lock: {Name:mkd85507bf8f9ec9c5c7efbe7e495c72c5a5c6cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:43:05.484643 670481 cache.go:107] acquiring lock: {Name:mke6a8256f44fd34328c51d78e752dc7a2edad5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:43:05.484757 670481 cache.go:107] acquiring lock: {Name:mke201119b191dfece838cdcc83d982d296b3f34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:43:05.484901 670481 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
I1203 02:43:05.484920 670481 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 exists
I1203 02:43:05.484932 670481 cache.go:107] acquiring lock: {Name:mk596458ff611e0392f6a3eaa557e2f3e7e8e7a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:43:05.484952 670481 cache.go:96] cache image "k8s.gcr.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0" took 902.757µs
I1203 02:43:05.484975 670481 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 exists
I1203 02:43:05.484973 670481 cache.go:80] save to tar file k8s.gcr.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 succeeded
I1203 02:43:05.484933 670481 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 976.746µs
I1203 02:43:05.484992 670481 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
I1203 02:43:05.484991 670481 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0" took 942.364µs
I1203 02:43:05.484947 670481 cache.go:107] acquiring lock: {Name:mk807a6c3536c72c72f3be8529e05e94b09807ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:43:05.485015 670481 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 succeeded
I1203 02:43:05.485038 670481 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 exists
I1203 02:43:05.485053 670481 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I1203 02:43:05.485059 670481 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0" took 1.026757ms
I1203 02:43:05.485076 670481 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 succeeded
I1203 02:43:05.485074 670481 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.015687ms
I1203 02:43:05.485091 670481 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I1203 02:43:05.485032 670481 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 exists
I1203 02:43:05.485106 670481 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
I1203 02:43:05.485119 670481 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" took 191.686µs
I1203 02:43:05.485134 670481 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
I1203 02:43:05.485056 670481 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 exists
I1203 02:43:05.485132 670481 cache.go:96] cache image "k8s.gcr.io/pause:3.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 495.648µs
I1203 02:43:05.485145 670481 cache.go:80] save to tar file k8s.gcr.io/pause:3.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
I1203 02:43:05.485160 670481 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0" took 541.414µs
I1203 02:43:05.485181 670481 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 succeeded
I1203 02:43:05.485238 670481 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 exists
I1203 02:43:05.485254 670481 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 exists
I1203 02:43:05.485266 670481 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0" took 1.306673ms
I1203 02:43:05.485270 670481 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0" took 531.113µs
I1203 02:43:05.485285 670481 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 succeeded
I1203 02:43:05.485297 670481 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 succeeded
I1203 02:43:05.485305 670481 cache.go:87] Successfully saved all images to host disk.
I1203 02:43:05.490644 670481 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 in local docker daemon, skipping pull
I1203 02:43:05.490682 670481 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 exists in daemon, skipping load
I1203 02:43:05.490695 670481 cache.go:206] Successfully downloaded all kic artifacts
I1203 02:43:05.490725 670481 start.go:313] acquiring machines lock for stopped-upgrade-20211203024124-532170: {Name:mk214bf8f7074c905cdda6d55d9ae0d3cdd88bb3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1203 02:43:05.490784 670481 start.go:317] acquired machines lock for "stopped-upgrade-20211203024124-532170" in 46.889µs
I1203 02:43:05.490801 670481 start.go:93] Skipping create...Using existing machine configuration
I1203 02:43:05.490809 670481 fix.go:55] fixHost starting:
I1203 02:43:05.491017 670481 cli_runner.go:115] Run: docker container inspect stopped-upgrade-20211203024124-532170 --format={{.State.Status}}
I1203 02:43:05.543776 670481 fix.go:108] recreateIfNeeded on stopped-upgrade-20211203024124-532170: state=Stopped err=<nil>
W1203 02:43:05.543831 670481 fix.go:134] unexpected machine state, will restart: <nil>
I1203 02:43:01.948859 668072 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgI
CBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My4yIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZ
XMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY
2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
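The long `printf … | base64 -d | sudo tee` command above is how minikube ships its generated containerd config over SSH without shell-quoting the TOML. A minimal sketch of the same round trip (the payload and the `/tmp` target path are illustrative stand-ins, not minikube's actual config or destination, and `sudo` is dropped so the sketch runs unprivileged):

```shell
# Encode the config locally, decode it on the far side; base64 makes the
# payload safe to embed in a single SSH command line.
CONFIG='version = 2
root = "/var/lib/containerd"'
ENCODED=$(printf %s "$CONFIG" | base64 | tr -d '\n')   # join wrapped output
printf %s "$ENCODED" | base64 -d > /tmp/config.toml
cat /tmp/config.toml
```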
I1203 02:43:01.966266 668072 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1203 02:43:01.975871 668072 crio.go:138] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1203 02:43:01.975944 668072 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1203 02:43:01.987558 668072 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1203 02:43:01.994631 668072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1203 02:43:02.088178 668072 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1203 02:43:02.199756 668072 start.go:403] Will wait 60s for socket path /run/containerd/containerd.sock
I1203 02:43:02.199827 668072 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1203 02:43:02.209378 668072 start.go:424] Will wait 60s for crictl version
I1203 02:43:02.209447 668072 ssh_runner.go:195] Run: sudo crictl version
I1203 02:43:02.242429 668072 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:
stderr:
time="2021-12-03T02:43:02Z" level=fatal msg="getting the runtime version failed: rpc error: code = Unknown desc = server is not initialized yet"
I1203 02:43:05.546133 670481 out.go:176] * Restarting existing docker container for "stopped-upgrade-20211203024124-532170" ...
I1203 02:43:05.546230 670481 cli_runner.go:115] Run: docker start stopped-upgrade-20211203024124-532170
I1203 02:43:06.204600 670481 cli_runner.go:115] Run: docker container inspect stopped-upgrade-20211203024124-532170 --format={{.State.Status}}
I1203 02:43:06.258790 670481 kic.go:420] container "stopped-upgrade-20211203024124-532170" state is running.
I1203 02:43:06.259210 670481 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-20211203024124-532170
I1203 02:43:06.310989 670481 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/profiles/stopped-upgrade-20211203024124-532170/config.json ...
I1203 02:43:06.311197 670481 machine.go:88] provisioning docker machine ...
I1203 02:43:06.311219 670481 ubuntu.go:169] provisioning hostname "stopped-upgrade-20211203024124-532170"
I1203 02:43:06.311260 670481 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20211203024124-532170
I1203 02:43:06.364687 670481 main.go:130] libmachine: Using SSH client type: native
I1203 02:43:06.364915 670481 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0200] 0x7a32e0 <nil> [] 0s} 127.0.0.1 33318 <nil> <nil>}
I1203 02:43:06.364935 670481 main.go:130] libmachine: About to run SSH command:
sudo hostname stopped-upgrade-20211203024124-532170 && echo "stopped-upgrade-20211203024124-532170" | sudo tee /etc/hostname
I1203 02:43:06.365539 670481 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53072->127.0.0.1:33318: read: connection reset by peer
I1203 02:43:09.510602 670481 main.go:130] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-20211203024124-532170
I1203 02:43:09.510704 670481 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-20211203024124-532170
I1203 02:43:09.563359 670481 main.go:130] libmachine: Using SSH client type: native
I1203 02:43:09.563558 670481 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0200] 0x7a32e0 <nil> [] 0s} 127.0.0.1 33318 <nil> <nil>}
I1203 02:43:09.563581 670481 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\sstopped-upgrade-20211203024124-532170' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-20211203024124-532170/g' /etc/hosts;
else
echo '127.0.1.1 stopped-upgrade-20211203024124-532170' | sudo tee -a /etc/hosts;
fi
fi
I1203 02:43:09.688295 670481 main.go:130] libmachine: SSH cmd err, output: <nil>:
I1203 02:43:09.688344 670481 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98
/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube}
I1203 02:43:09.688368 670481 ubuntu.go:177] setting up certificates
I1203 02:43:09.688379 670481 provision.go:83] configureAuth start
I1203 02:43:09.688432 670481 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-20211203024124-532170
I1203 02:43:09.738851 670481 provision.go:138] copyHostCerts
I1203 02:43:09.738917 670481 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/ca.pem, removing ...
I1203 02:43:09.738931 670481 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/ca.pem
I1203 02:43:09.738989 670481 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/ca.pem (1082 bytes)
I1203 02:43:09.739126 670481 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cert.pem, removing ...
I1203 02:43:09.739145 670481 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cert.pem
I1203 02:43:09.739173 670481 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/cert.pem (1123 bytes)
I1203 02:43:09.739266 670481 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/key.pem, removing ...
I1203 02:43:09.739280 670481 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/key.pem
I1203 02:43:09.739305 670481 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/key.pem (1675 bytes)
I1203 02:43:09.739379 670481 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12084-528713-c80793cd5dc08b1689a4be5c7f82539738da3c98/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-20211203024124-532170 san=[192.168.59.203 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-20211203024124-532170]
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
44ad2903327a4 6e38f40d628db 22 seconds ago Running storage-provisioner 0 9d44abe734bc7
fa412d0bc64fe 8d147537fb7d1 40 seconds ago Running coredns 0 989b0a20cd6bf
9fe05a46928d7 6de166512aa22 52 seconds ago Running kindnet-cni 0 52fd468fab925
b96a30f689aa8 edeff87e48029 53 seconds ago Running kube-proxy 0 1150dc7540b90
da77c4c16ea38 0ce02f92d3e43 About a minute ago Running kube-controller-manager 0 75715f0308eb1
377954dfce779 721ba97f54a65 About a minute ago Running kube-scheduler 0 4b59fc65934a4
c6bb5925aa26b 0048118155842 About a minute ago Running etcd 0 26442a09720c6
7861cc2da74e1 8a5cc299272d9 About a minute ago Running kube-apiserver 0 1a438759f3a6e
*
* ==> containerd <==
* -- Logs begin at Fri 2021-12-03 02:41:27 UTC, end at Fri 2021-12-03 02:43:14 UTC. --
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.222703254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.222716851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.222726647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.222848435Z" level=warning msg="`default_runtime` is deprecated, please use `default_runtime_name` to reference the default configuration you have defined in `runtimes`"
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.222920521Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:default DefaultRuntime:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} UntrustedWorkloadRuntime:{Type: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} Runtimes:map[default:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:<nil> PrivilegedWithoutHostDevices:false BaseRuntimeSpec:} runc:{Type:io.containerd.runc.v2 Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:0xc00041ffb0 PrivilegedWithoutHostDevices:false BaseRuntimeSpec:}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPlugin
ConfDir:/etc/cni/net.mk NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:k8s.gcr.io/pause:3.5 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true IgnoreImageDefinedVolumes:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.223014188Z" level=info msg="Connect containerd service"
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.223064254Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.223824728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.223940610Z" level=info msg="Start subscribing containerd event"
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.224031243Z" level=info msg="Start recovering state"
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.224569581Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.224644393Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.224702180Z" level=info msg="containerd successfully booted in 0.040989s"
Dec 03 02:42:38 pause-20211203024124-532170 systemd[1]: Started containerd container runtime.
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.310686273Z" level=info msg="Start event monitor"
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.310736093Z" level=info msg="Start snapshots syncer"
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.310744594Z" level=info msg="Start cni network conf syncer"
Dec 03 02:42:38 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:38.310757514Z" level=info msg="Start streaming server"
Dec 03 02:42:51 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:51.156621095Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:9aa5302f-010c-4351-96c2-b2485180be47,Namespace:kube-system,Attempt:0,}"
Dec 03 02:42:51 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:51.186943591Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f pid=2529
Dec 03 02:42:51 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:51.361766697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:9aa5302f-010c-4351-96c2-b2485180be47,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f\""
Dec 03 02:42:51 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:51.365150740Z" level=info msg="CreateContainer within sandbox \"9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
Dec 03 02:42:51 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:51.436686048Z" level=info msg="CreateContainer within sandbox \"9d44abe734bc79e2d4d48b83d30dedf4138f459cc1229518929db77d5d799c0f\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"44ad2903327a4831a287d02d9bf015c7c79184800d7de792ec88dd259c43c6f8\""
Dec 03 02:42:51 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:51.437875321Z" level=info msg="StartContainer for \"44ad2903327a4831a287d02d9bf015c7c79184800d7de792ec88dd259c43c6f8\""
Dec 03 02:42:51 pause-20211203024124-532170 containerd[2242]: time="2021-12-03T02:42:51.586425386Z" level=info msg="StartContainer for \"44ad2903327a4831a287d02d9bf015c7c79184800d7de792ec88dd259c43c6f8\" returns successfully"
*
* ==> coredns [fa412d0bc64fe41cabe75d75b154341ad2989d0d273aca5f583e5b450247fcaf] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
CoreDNS-1.8.4
linux/amd64, go1.16.4, 053c4d5
*
* ==> describe nodes <==
*
* ==> dmesg <==
* [ +1.203764] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev veth344f8041
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff de 19 e6 42 b4 2c 08 06 .........B.,..
[Dec 3 02:35] cgroup: cgroup2: unknown option "nsdelegate"
[ +43.145422] cgroup: cgroup2: unknown option "nsdelegate"
[ +1.997001] IPv4: martian source 10.244.1.2 from 10.244.1.2, on dev vethae8775a3
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 09 60 d3 6c 6f 08 06 ........`.lo..
[Dec 3 02:36] cgroup: cgroup2: unknown option "nsdelegate"
[Dec 3 02:38] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethc07b7bb4
[ +0.000003] ll header: 00000000: ff ff ff ff ff ff 9a 08 d6 d0 27 5c 08 06 ..........'\..
[Dec 3 02:39] cgroup: cgroup2: unknown option "nsdelegate"
[Dec 3 02:40] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth5d5f7a44
[ +0.000002] ll header: 00000000: ff ff ff ff ff ff 9e b4 3b e2 7b 9d 08 06 ........;.{...
[Dec 3 02:41] cgroup: cgroup2: unknown option "nsdelegate"
[ +14.204723] cgroup: cgroup2: unknown option "nsdelegate"
[ +1.025718] cgroup: cgroup2: unknown option "nsdelegate"
[ +0.991957] cgroup: cgroup2: unknown option "nsdelegate"
[ +19.678388] cgroup: cgroup2: unknown option "nsdelegate"
[ +2.060023] cgroup: cgroup2: unknown option "nsdelegate"
[Dec 3 02:42] cgroup: cgroup2: unknown option "nsdelegate"
[ +13.603429] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth170a53a8
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 4a 38 90 3b cc 08 06 .......J8.;...
[ +9.575101] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev vethe30be4e2
[ +0.000003] ll header: 00000000: ff ff ff ff ff ff aa eb 14 cd 69 e4 08 06 ..........i...
[ +9.369686] cgroup: cgroup2: unknown option "nsdelegate"
[Dec 3 02:43] cgroup: cgroup2: unknown option "nsdelegate"
*
* ==> etcd [c6bb5925aa26bb7f20b1c83f7c74ac39e07a5a726abc7531bf949e0549809c83] <==
* {"level":"warn","ts":"2021-12-03T02:42:54.631Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"1.361717606s","expected-duration":"1s"}
{"level":"warn","ts":"2021-12-03T02:42:54.852Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.854169973s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:4890"}
{"level":"info","ts":"2021-12-03T02:42:54.852Z","caller":"traceutil/trace.go:171","msg":"trace[725459631] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:529; }","duration":"1.854256104s","start":"2021-12-03T02:42:52.998Z","end":"2021-12-03T02:42:54.852Z","steps":["trace[725459631] 'range keys from in-memory index tree' (duration: 1.854072249s)"],"step_count":1}
{"level":"warn","ts":"2021-12-03T02:42:54.852Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.582355422s","expected-duration":"100ms","prefix":"","request":"header:<ID:8128009408346100277 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-20211203024124-532170\" mod_revision:510 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-20211203024124-532170\" value_size:530 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-20211203024124-532170\" > >>","response":"size:16"}
{"level":"warn","ts":"2021-12-03T02:42:54.852Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-12-03T02:42:52.998Z","time spent":"1.854314416s","remote":"127.0.0.1:33866","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":4914,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" "}
{"level":"info","ts":"2021-12-03T02:42:54.852Z","caller":"traceutil/trace.go:171","msg":"trace[615955715] linearizableReadLoop","detail":"{readStateIndex:548; appliedIndex:547; }","duration":"1.188717958s","start":"2021-12-03T02:42:53.664Z","end":"2021-12-03T02:42:54.852Z","steps":["trace[615955715] 'read index received' (duration: 968.013476ms)","trace[615955715] 'applied index is now lower than readState.Index' (duration: 220.703541ms)"],"step_count":2}
{"level":"info","ts":"2021-12-03T02:42:54.852Z","caller":"traceutil/trace.go:171","msg":"trace[1851729177] transaction","detail":"{read_only:false; response_revision:530; number_of_response:1; }","duration":"1.582656878s","start":"2021-12-03T02:42:53.270Z","end":"2021-12-03T02:42:54.852Z","steps":["trace[1851729177] 'compare' (duration: 1.582261694s)"],"step_count":1}
{"level":"warn","ts":"2021-12-03T02:42:54.852Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.613711605s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2021-12-03T02:42:54.852Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.188840443s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
{"level":"info","ts":"2021-12-03T02:42:54.852Z","caller":"traceutil/trace.go:171","msg":"trace[43474745] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:530; }","duration":"1.188875717s","start":"2021-12-03T02:42:53.664Z","end":"2021-12-03T02:42:54.852Z","steps":["trace[43474745] 'agreement among raft nodes before linearized reading' (duration: 1.188815061s)"],"step_count":1}
{"level":"info","ts":"2021-12-03T02:42:54.852Z","caller":"traceutil/trace.go:171","msg":"trace[1805756956] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:529; }","duration":"1.613993557s","start":"2021-12-03T02:42:53.238Z","end":"2021-12-03T02:42:54.852Z","steps":["trace[1805756956] 'range keys from in-memory index tree' (duration: 1.61359796s)"],"step_count":1}
{"level":"warn","ts":"2021-12-03T02:42:54.852Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-12-03T02:42:53.664Z","time spent":"1.188915825s","remote":"127.0.0.1:33864","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1153,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
{"level":"warn","ts":"2021-12-03T02:42:54.852Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-12-03T02:42:53.238Z","time spent":"1.614050289s","remote":"127.0.0.1:33934","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
{"level":"warn","ts":"2021-12-03T02:42:54.853Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-12-03T02:42:53.270Z","time spent":"1.582731298s","remote":"127.0.0.1:33918","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":598,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-20211203024124-532170\" mod_revision:510 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-20211203024124-532170\" value_size:530 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-20211203024124-532170\" > >"}
{"level":"warn","ts":"2021-12-03T02:42:55.362Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128009408346100279,"retry-timeout":"500ms"}
{"level":"warn","ts":"2021-12-03T02:42:55.862Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128009408346100279,"retry-timeout":"500ms"}
{"level":"warn","ts":"2021-12-03T02:42:56.363Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128009408346100279,"retry-timeout":"500ms"}
{"level":"warn","ts":"2021-12-03T02:42:56.800Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"1.939127407s","expected-duration":"1s"}
{"level":"info","ts":"2021-12-03T02:42:56.800Z","caller":"traceutil/trace.go:171","msg":"trace[121078242] linearizableReadLoop","detail":"{readStateIndex:549; appliedIndex:549; }","duration":"1.939141286s","start":"2021-12-03T02:42:54.861Z","end":"2021-12-03T02:42:56.800Z","steps":["trace[121078242] 'read index received' (duration: 1.939133379s)","trace[121078242] 'applied index is now lower than readState.Index' (duration: 6.677µs)"],"step_count":2}
{"level":"warn","ts":"2021-12-03T02:42:56.815Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.953947612s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/kubernetes\" ","response":"range_response_count:1 size:706"}
{"level":"warn","ts":"2021-12-03T02:42:56.815Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.95289822s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2021-12-03T02:42:56.815Z","caller":"traceutil/trace.go:171","msg":"trace[634251822] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:531; }","duration":"1.953003147s","start":"2021-12-03T02:42:54.862Z","end":"2021-12-03T02:42:56.815Z","steps":["trace[634251822] 'agreement among raft nodes before linearized reading' (duration: 1.938259419s)"],"step_count":1}
{"level":"info","ts":"2021-12-03T02:42:56.815Z","caller":"traceutil/trace.go:171","msg":"trace[1182520940] range","detail":"{range_begin:/registry/services/specs/default/kubernetes; range_end:; response_count:1; response_revision:531; }","duration":"1.954019471s","start":"2021-12-03T02:42:54.861Z","end":"2021-12-03T02:42:56.815Z","steps":["trace[1182520940] 'agreement among raft nodes before linearized reading' (duration: 1.93925253s)","trace[1182520940] 'range keys from in-memory index tree' (duration: 14.649038ms)"],"step_count":2}
{"level":"warn","ts":"2021-12-03T02:42:56.815Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-12-03T02:42:54.862Z","time spent":"1.953050451s","remote":"127.0.0.1:33934","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
{"level":"warn","ts":"2021-12-03T02:42:56.815Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-12-03T02:42:54.861Z","time spent":"1.954077543s","remote":"127.0.0.1:33880","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":730,"request content":"key:\"/registry/services/specs/default/kubernetes\" "}
*
* ==> kernel <==
* 02:43:24 up 4:26, 0 users, load average: 8.20, 4.08, 2.19
Linux pause-20211203024124-532170 4.9.0-16-amd64 #1 SMP Debian 4.9.272-2 (2021-07-19) x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
*
* ==> kube-apiserver [7861cc2da74e14be1efce89a227b8a087e1a4ad74eea5a57fd9979faa37223b4] <==
* Trace[1329700100]: ---"About to write a response" 7966ms (02:42:32.807)
Trace[1329700100]: [7.966737646s] [7.966737646s] END
I1203 02:42:32.808554 1 trace.go:205] Trace[10124701]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-20211203024124-532170,user-agent:kubelet/v1.22.4 (linux/amd64) kubernetes/b695d79,audit-id:03c422dd-754f-491f-8b0f-8cda1b614985,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (03-Dec-2021 02:42:30.264) (total time: 2543ms):
Trace[10124701]: ---"About to write a response" 2537ms (02:42:32.801)
Trace[10124701]: [2.543694862s] [2.543694862s] END
I1203 02:42:32.811479 1 trace.go:205] Trace[1566228004]: "GuaranteedUpdate etcd3" type:*coordination.Lease (03-Dec-2021 02:42:31.447) (total time: 1363ms):
Trace[1566228004]: ---"Transaction committed" 1363ms (02:42:32.811)
Trace[1566228004]: [1.363871215s] [1.363871215s] END
I1203 02:42:32.811607 1 trace.go:205] Trace[659618151]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20211203024124-532170,user-agent:kubelet/v1.22.4 (linux/amd64) kubernetes/b695d79,audit-id:7f82b614-c0f3-45b7-a7e7-fd4d532ee2df,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (03-Dec-2021 02:42:31.447) (total time: 1364ms):
Trace[659618151]: ---"Object stored in database" 1363ms (02:42:32.811)
Trace[659618151]: [1.364142405s] [1.364142405s] END
I1203 02:42:54.853636 1 trace.go:205] Trace[921726475]: "GuaranteedUpdate etcd3" type:*coordination.Lease (03-Dec-2021 02:42:53.269) (total time: 1584ms):
Trace[921726475]: ---"Transaction committed" 1583ms (02:42:54.853)
Trace[921726475]: [1.584390803s] [1.584390803s] END
I1203 02:42:54.853975 1 trace.go:205] Trace[719452703]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20211203024124-532170,user-agent:kubelet/v1.22.4 (linux/amd64) kubernetes/b695d79,audit-id:2d516ca3-d159-452f-a840-f129cf10d146,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (03-Dec-2021 02:42:53.269) (total time: 1584ms):
Trace[719452703]: ---"Object stored in database" 1584ms (02:42:54.853)
Trace[719452703]: [1.584903164s] [1.584903164s] END
I1203 02:42:54.854055 1 trace.go:205] Trace[1850500647]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:4359df40-5daa-4a50-bd70-f48390ea66ec,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (03-Dec-2021 02:42:53.663) (total time: 1190ms):
Trace[1850500647]: ---"About to write a response" 1190ms (02:42:54.853)
Trace[1850500647]: [1.190501383s] [1.190501383s] END
I1203 02:42:54.857513 1 trace.go:205] Trace[746149164]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (03-Dec-2021 02:42:52.997) (total time: 1859ms):
Trace[746149164]: [1.859677903s] [1.859677903s] END
I1203 02:42:54.858039 1 trace.go:205] Trace[100308926]: "List" url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:9ee7eb23-38af-4c85-8170-deb854735830,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (03-Dec-2021 02:42:52.997) (total time: 1860ms):
Trace[100308926]: ---"Listing from storage done" 1859ms (02:42:54.857)
Trace[100308926]: [1.860215353s] [1.860215353s] END
*
* ==> kube-controller-manager [da77c4c16ea386dda39fcf91ff000cf63eb9f9b8cef0af731ccbbe2fe9f197dd] <==
* I1203 02:42:19.598296 1 shared_informer.go:247] Caches are synced for crt configmap
I1203 02:42:19.598317 1 shared_informer.go:247] Caches are synced for bootstrap_signer
I1203 02:42:19.598432 1 shared_informer.go:247] Caches are synced for resource quota
I1203 02:42:19.598469 1 shared_informer.go:247] Caches are synced for PVC protection
I1203 02:42:19.598487 1 shared_informer.go:247] Caches are synced for deployment
I1203 02:42:19.598941 1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-pause-20211203024124-532170" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1203 02:42:19.605085 1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-pause-20211203024124-532170" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1203 02:42:19.623094 1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-pause-20211203024124-532170" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1203 02:42:19.629084 1 shared_informer.go:247] Caches are synced for persistent volume
I1203 02:42:19.638354 1 shared_informer.go:247] Caches are synced for namespace
I1203 02:42:19.641965 1 shared_informer.go:247] Caches are synced for service account
I1203 02:42:19.697287 1 shared_informer.go:247] Caches are synced for attach detach
I1203 02:42:19.698168 1 shared_informer.go:247] Caches are synced for PV protection
I1203 02:42:19.698204 1 shared_informer.go:247] Caches are synced for expand
I1203 02:42:19.723295 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5xmj2"
I1203 02:42:19.723336 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-n6hkm"
I1203 02:42:19.723690 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
I1203 02:42:19.800345 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-4z8xc"
I1203 02:42:19.829969 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-qwmpd"
I1203 02:42:20.016290 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
I1203 02:42:20.026075 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-4z8xc"
I1203 02:42:20.097398 1 shared_informer.go:247] Caches are synced for garbage collector
I1203 02:42:20.177064 1 shared_informer.go:247] Caches are synced for garbage collector
I1203 02:42:20.177104 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1203 02:42:34.546360 1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
*
* ==> kube-proxy [b96a30f689aa838a728994e19ad57f3194b434b5136e1ecab8a96d391bc05eaf] <==
* I1203 02:42:20.634200 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
I1203 02:42:20.634302 1 server_others.go:140] Detected node IP 192.168.49.2
W1203 02:42:20.634321 1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
I1203 02:42:21.018258 1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I1203 02:42:21.018308 1 server_others.go:212] Using iptables Proxier.
I1203 02:42:21.018322 1 server_others.go:219] creating dualStackProxier for iptables.
W1203 02:42:21.018342 1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I1203 02:42:21.018662 1 server.go:649] Version: v1.22.4
I1203 02:42:21.020100 1 config.go:224] Starting endpoint slice config controller
I1203 02:42:21.020120 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1203 02:42:21.020193 1 config.go:315] Starting service config controller
I1203 02:42:21.020198 1 shared_informer.go:240] Waiting for caches to sync for service config
I1203 02:42:21.121119 1 shared_informer.go:247] Caches are synced for service config
I1203 02:42:21.121195 1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [377954dfce779fd5854219f915357e582012617cb03c0a70c60b9987499185ac] <==
* W1203 02:42:03.294201 1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1203 02:42:03.406171 1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
I1203 02:42:03.406300 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1203 02:42:03.406324 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1203 02:42:03.406344 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E1203 02:42:03.407679 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1203 02:42:03.415896 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1203 02:42:03.416427 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E1203 02:42:03.418623 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1203 02:42:03.421833 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1203 02:42:03.422567 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1203 02:42:03.422726 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1203 02:42:03.422862 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1203 02:42:03.423000 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1203 02:42:03.423347 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1203 02:42:03.423470 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E1203 02:42:03.423544 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1203 02:42:03.423558 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1203 02:42:03.423644 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E1203 02:42:03.424827 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1203 02:42:04.333813 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1203 02:42:04.364124 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1203 02:42:04.365071 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1203 02:42:04.472526 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
I1203 02:42:04.906826 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Fri 2021-12-03 02:41:27 UTC, end at Fri 2021-12-03 02:43:24 UTC. --
Dec 03 02:42:19 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:19.759016 1250 topology_manager.go:200] "Topology Admit Handler"
Dec 03 02:42:19 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:19.769175 1250 topology_manager.go:200] "Topology Admit Handler"
Dec 03 02:42:19 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:19.862569 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/368971a0-2c0b-44f2-b66d-3257c9f080f4-cni-cfg\") pod \"kindnet-n6hkm\" (UID: \"368971a0-2c0b-44f2-b66d-3257c9f080f4\") "
Dec 03 02:42:19 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:19.862653 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b4f2238-af1f-487f-80ff-a0a932437cee-lib-modules\") pod \"kube-proxy-5xmj2\" (UID: \"9b4f2238-af1f-487f-80ff-a0a932437cee\") "
Dec 03 02:42:19 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:19.862702 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/368971a0-2c0b-44f2-b66d-3257c9f080f4-xtables-lock\") pod \"kindnet-n6hkm\" (UID: \"368971a0-2c0b-44f2-b66d-3257c9f080f4\") "
Dec 03 02:42:19 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:19.862735 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/368971a0-2c0b-44f2-b66d-3257c9f080f4-lib-modules\") pod \"kindnet-n6hkm\" (UID: \"368971a0-2c0b-44f2-b66d-3257c9f080f4\") "
Dec 03 02:42:19 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:19.862769 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b4f2238-af1f-487f-80ff-a0a932437cee-xtables-lock\") pod \"kube-proxy-5xmj2\" (UID: \"9b4f2238-af1f-487f-80ff-a0a932437cee\") "
Dec 03 02:42:19 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:19.862848 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b4f2238-af1f-487f-80ff-a0a932437cee-kube-proxy\") pod \"kube-proxy-5xmj2\" (UID: \"9b4f2238-af1f-487f-80ff-a0a932437cee\") "
Dec 03 02:42:19 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:19.862905 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fj6q\" (UniqueName: \"kubernetes.io/projected/368971a0-2c0b-44f2-b66d-3257c9f080f4-kube-api-access-6fj6q\") pod \"kindnet-n6hkm\" (UID: \"368971a0-2c0b-44f2-b66d-3257c9f080f4\") "
Dec 03 02:42:19 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:19.862938 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjt4z\" (UniqueName: \"kubernetes.io/projected/9b4f2238-af1f-487f-80ff-a0a932437cee-kube-api-access-cjt4z\") pod \"kube-proxy-5xmj2\" (UID: \"9b4f2238-af1f-487f-80ff-a0a932437cee\") "
Dec 03 02:42:21 pause-20211203024124-532170 kubelet[1250]: E1203 02:42:21.617953 1250 kubelet.go:2337] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 03 02:42:32 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:32.825477 1250 topology_manager.go:200] "Topology Admit Handler"
Dec 03 02:42:32 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:32.961188 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f97c64e-1e68-4356-a1b5-f77f7cc81b24-config-volume\") pod \"coredns-78fcd69978-qwmpd\" (UID: \"7f97c64e-1e68-4356-a1b5-f77f7cc81b24\") "
Dec 03 02:42:32 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:32.961235 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt727\" (UniqueName: \"kubernetes.io/projected/7f97c64e-1e68-4356-a1b5-f77f7cc81b24-kube-api-access-wt727\") pod \"coredns-78fcd69978-qwmpd\" (UID: \"7f97c64e-1e68-4356-a1b5-f77f7cc81b24\") "
Dec 03 02:42:38 pause-20211203024124-532170 kubelet[1250]: W1203 02:42:38.155456 1250 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock /run/containerd/containerd.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
Dec 03 02:42:38 pause-20211203024124-532170 kubelet[1250]: W1203 02:42:38.155462 1250 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/run/containerd/containerd.sock /run/containerd/containerd.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
Dec 03 02:42:38 pause-20211203024124-532170 kubelet[1250]: E1203 02:42:38.524995 1250 remote_runtime.go:207] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\"" filter="nil"
Dec 03 02:42:38 pause-20211203024124-532170 kubelet[1250]: E1203 02:42:38.525058 1250 kuberuntime_sandbox.go:281] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
Dec 03 02:42:38 pause-20211203024124-532170 kubelet[1250]: E1203 02:42:38.525083 1250 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
Dec 03 02:42:50 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:50.840523 1250 topology_manager.go:200] "Topology Admit Handler"
Dec 03 02:42:50 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:50.997439 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9aa5302f-010c-4351-96c2-b2485180be47-tmp\") pod \"storage-provisioner\" (UID: \"9aa5302f-010c-4351-96c2-b2485180be47\") "
Dec 03 02:42:50 pause-20211203024124-532170 kubelet[1250]: I1203 02:42:50.997691 1250 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfl2n\" (UniqueName: \"kubernetes.io/projected/9aa5302f-010c-4351-96c2-b2485180be47-kube-api-access-cfl2n\") pod \"storage-provisioner\" (UID: \"9aa5302f-010c-4351-96c2-b2485180be47\") "
Dec 03 02:42:54 pause-20211203024124-532170 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
Dec 03 02:42:54 pause-20211203024124-532170 systemd[1]: kubelet.service: Succeeded.
Dec 03 02:42:54 pause-20211203024124-532170 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
*
* ==> storage-provisioner [44ad2903327a4831a287d02d9bf015c7c79184800d7de792ec88dd259c43c6f8] <==
* I1203 02:42:51.601493 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1203 02:42:51.615295 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1203 02:42:51.615709 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1203 02:42:51.641608 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1203 02:42:51.641868 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20211203024124-532170_e895639e-54d7-41f8-ba55-6be5b703777d!
I1203 02:42:51.643094 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fbc908e7-bac3-41f4-ad8a-28a0acbab933", APIVersion:"v1", ResourceVersion:"527", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20211203024124-532170_e895639e-54d7-41f8-ba55-6be5b703777d became leader
I1203 02:42:51.742125 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20211203024124-532170_e895639e-54d7-41f8-ba55-6be5b703777d!
-- /stdout --
** stderr **
E1203 02:43:24.169778 671866 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
! unable to fetch logs for: describe nodes
** /stderr **
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20211203024124-532170 -n pause-20211203024124-532170
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20211203024124-532170 -n pause-20211203024124-532170: exit status 2 (686.054671ms)
-- stdout --
Paused
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:257: "pause-20211203024124-532170" apiserver is not running, skipping kubectl commands (state="Paused")
--- FAIL: TestPause/serial/Pause (33.33s)
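The failure above hinges on the last status check: `minikube status --format={{.APIServer}}` exits non-zero (status 2) and prints `Paused`, which the test helper treats as "apiserver not running, skip kubectl commands". A minimal sketch of that interpretation logic, assuming a hypothetical `apiserver_running` helper (the real check lives in `helpers_test.go`, not in this script):

```shell
#!/bin/sh
# Hypothetical helper mirroring the helpers_test.go behavior seen in the log:
# decide whether kubectl commands are safe based on the APIServer status
# string reported by `minikube status --format={{.APIServer}}`.
apiserver_running() {
  # $1: status string, e.g. "Running", "Paused", "Stopped"
  case "$1" in
    Running) return 0 ;;  # apiserver up: kubectl commands may proceed
    *)       return 1 ;;  # "Paused"/"Stopped": non-zero exit, skip kubectl
  esac
}

# Simulate the status string captured in the failing run above.
status="Paused"
if apiserver_running "$status"; then
  echo "apiserver running"
else
  echo "apiserver is not running, skipping kubectl commands (state=\"$status\")"
fi
```

This matches the log's outcome: with state `Paused` the helper records "status error: exit status 2 (may be ok)" and skips the kubectl diagnostics rather than failing outright on the status call itself.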