Test Report: KVM_Linux 9698

Commit: 72ae9c24a6567fed6f66704b6e0b773ea4700fb6

Failed tests (3/172)

Failed test                                               Duration (s)
TestErrorSpam                                             88.77
TestScheduledStop                                         114.4
TestStartStop/group/crio/serial/VerifyKubernetesImages    6.2
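
TestErrorSpam (detailed below) fails on an unexpected stderr line: minikube's kvm2 driver self-update could not verify its download because the checksum file for the v1.15.0 release asset came back with HTTP 404. The following is a minimal sketch, not part of the generated report, for probing that URL independently; it assumes network access and simply issues a HEAD request against the checksum URL copied from the stderr:

    package main

    import (
    	"fmt"
    	"net/http"
    )

    func main() {
    	// URL copied verbatim from the TestErrorSpam stderr below.
    	url := "https://github.com/kubernetes/minikube/releases/download/v1.15.0/docker-machine-driver-kvm2.sha256"

    	// A HEAD request is enough to see the response code the driver updater hit.
    	resp, err := http.Head(url)
    	if err != nil {
    		fmt.Println("request failed:", err)
    		return
    	}
    	defer resp.Body.Close()

    	// A 404 here would match the "bad response code: 404" in the stderr below.
    	fmt.Println("status for checksum file:", resp.Status)
    }

A 404 from this check would line up with the "bad response code: 404" in the stderr, suggesting the .sha256 asset was unavailable at download time; the stdout below shows the cluster itself still started, so the test fails only on the extra stderr output.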
TestErrorSpam (88.77s)

=== RUN   TestErrorSpam
=== PAUSE TestErrorSpam

=== CONT  TestErrorSpam
error_spam_test.go:62: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20201113231417-7409 -n=1 --memory=2250 --wait=false --driver=kvm2 

=== CONT  TestErrorSpam
error_spam_test.go:62: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20201113231417-7409 -n=1 --memory=2250 --wait=false --driver=kvm2 : (1m23.298104544s)
error_spam_test.go:77: unexpected stderr: "! Unable to update kvm2 driver: download: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.15.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.15.0/docker-machine-driver-kvm2.sha256 Dst:/home/jenkins/minikube-integration/linux-amd64-kvm2-9698-3065-72ae9c24a6567fed6f66704b6e0b773ea4700fb6/.minikube/bin/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x2ac93c8 0x2ac93c8 0x2ac93c8 0x2ac93c8 0x2ac93c8 0x2ac93c8 0x2ac93c8] Decompressors:map[bz2:0x2ac93c8 gz:0x2ac93c8 tar.bz2:0x2ac93c8 tar.gz:0x2ac93c8 tar.xz:0x2ac93c8 tbz2:0x2ac93c8 tgz:0x2ac93c8 txz:0x2ac93c8 xz:0x2ac93c8 zip:0x2ac93c8] Getters:map[file:0xc0004995b0 http:0xc0004647e0 https:0xc000464800] Dir:false ProgressListener:0x2a8d3c0 Options:[0xcd96e0]}: invalid checksum: Error downloading checksum file: bad response code: 404"
error_spam_test.go:91: minikube stdout:
* [nospam-20201113231417-7409] minikube v1.15.0 on Debian 9.13
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-9698-3065-72ae9c24a6567fed6f66704b6e0b773ea4700fb6/kubeconfig
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-9698-3065-72ae9c24a6567fed6f66704b6e0b773ea4700fb6/.minikube
- MINIKUBE_LOCATION=9698
* Using the kvm2 driver based on user configuration
* Downloading driver docker-machine-driver-kvm2:
* Starting control plane node nospam-20201113231417-7409 in cluster nospam-20201113231417-7409
* Creating kvm2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-20201113231417-7409" cluster and "default" namespace by default
error_spam_test.go:92: minikube stderr:
! Unable to update kvm2 driver: download: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.15.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.15.0/docker-machine-driver-kvm2.sha256 Dst:/home/jenkins/minikube-integration/linux-amd64-kvm2-9698-3065-72ae9c24a6567fed6f66704b6e0b773ea4700fb6/.minikube/bin/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x2ac93c8 0x2ac93c8 0x2ac93c8 0x2ac93c8 0x2ac93c8 0x2ac93c8 0x2ac93c8] Decompressors:map[bz2:0x2ac93c8 gz:0x2ac93c8 tar.bz2:0x2ac93c8 tar.gz:0x2ac93c8 tar.xz:0x2ac93c8 tbz2:0x2ac93c8 tgz:0x2ac93c8 txz:0x2ac93c8 xz:0x2ac93c8 zip:0x2ac93c8] Getters:map[file:0xc0004995b0 http:0xc0004647e0 https:0xc000464800] Dir:false ProgressListener:0x2a8d3c0 Options:[0xcd96e0]}: invalid checksum: Error downloading checksum file: bad response code: 404
error_spam_test.go:94: *** TestErrorSpam FAILED at 2020-11-13 23:15:40.467310404 +0000 UTC m=+1845.400695659
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p nospam-20201113231417-7409 -n nospam-20201113231417-7409

=== CONT  TestErrorSpam
helpers_test.go:238: <<< TestErrorSpam FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestErrorSpam]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20201113231417-7409 logs -n 25
helpers_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p nospam-20201113231417-7409 logs -n 25: (2.853965439s)
helpers_test.go:246: TestErrorSpam logs: 
-- stdout --
	* ==> Docker <==
	* -- Logs begin at Fri 2020-11-13 23:14:32 UTC, end at Fri 2020-11-13 23:15:42 UTC. --
	* Nov 13 23:14:56 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:56.481211381Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	* Nov 13 23:14:56 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:56.618172724Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	* Nov 13 23:14:56 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:56.618349496Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	* Nov 13 23:14:56 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:56.618463600Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	* Nov 13 23:14:56 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:56.618480886Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	* Nov 13 23:14:56 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:56.618495000Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	* Nov 13 23:14:56 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:56.618509257Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	* Nov 13 23:14:56 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:56.619069990Z" level=info msg="Loading containers: start."
	* Nov 13 23:14:56 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:56.930508466Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	* Nov 13 23:14:57 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:57.049560583Z" level=info msg="Loading containers: done."
	* Nov 13 23:14:57 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:57.092188344Z" level=info msg="Docker daemon" commit=4484c46 graphdriver(s)=overlay2 version=19.03.13
	* Nov 13 23:14:57 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:57.093269348Z" level=info msg="Daemon has completed initialization"
	* Nov 13 23:14:57 nospam-20201113231417-7409 systemd[1]: Started Docker Application Container Engine.
	* Nov 13 23:14:57 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:57.126120126Z" level=info msg="API listen on /var/run/docker.sock"
	* Nov 13 23:14:57 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:57.132121223Z" level=info msg="API listen on [::]:2376"
	* Nov 13 23:15:13 nospam-20201113231417-7409 dockerd[2338]: time="2020-11-13T23:15:13.169179598Z" level=info msg="shim containerd-shim started" address=/containerd-shim/7ae2c145f733493a8f8313b396588f3c0fc6e97942c4c5217b6e0c5a3764fbf5.sock debug=false pid=3203
	* Nov 13 23:15:13 nospam-20201113231417-7409 dockerd[2338]: time="2020-11-13T23:15:13.178795893Z" level=info msg="shim containerd-shim started" address=/containerd-shim/c2b95fdf0b7fa608ac096b551a14f575fc027e9bef8c666cb67a86856798d00a.sock debug=false pid=3208
	* Nov 13 23:15:13 nospam-20201113231417-7409 dockerd[2338]: time="2020-11-13T23:15:13.192600842Z" level=info msg="shim containerd-shim started" address=/containerd-shim/a0b5effd9ad5455a1147b29b8254ac61a6920136567895b001d73a9661c27393.sock debug=false pid=3223
	* Nov 13 23:15:13 nospam-20201113231417-7409 dockerd[2338]: time="2020-11-13T23:15:13.234005383Z" level=info msg="shim containerd-shim started" address=/containerd-shim/6ed9aa07c4c409f8656f0885012b297e6526d028242e2f067e4905d0a55e50c2.sock debug=false pid=3241
	* Nov 13 23:15:14 nospam-20201113231417-7409 dockerd[2338]: time="2020-11-13T23:15:14.954245779Z" level=info msg="shim containerd-shim started" address=/containerd-shim/1d4033cf40491146328f0fbd1e7f65208ef73ba2107004fcb89d720d61011305.sock debug=false pid=3368
	* Nov 13 23:15:15 nospam-20201113231417-7409 dockerd[2338]: time="2020-11-13T23:15:15.017987633Z" level=info msg="shim containerd-shim started" address=/containerd-shim/06776ea32fc85206f2d67be754730eda2baacab5b70e67525fe1ec0c3a9bc48d.sock debug=false pid=3392
	* Nov 13 23:15:15 nospam-20201113231417-7409 dockerd[2338]: time="2020-11-13T23:15:15.046811182Z" level=info msg="shim containerd-shim started" address=/containerd-shim/0ea21170c2c27edb0ebb70e389d0b842f91abd4fef960a8a5e0c4dc47ca3e4e7.sock debug=false pid=3405
	* Nov 13 23:15:15 nospam-20201113231417-7409 dockerd[2338]: time="2020-11-13T23:15:15.060214842Z" level=info msg="shim containerd-shim started" address=/containerd-shim/e51d85317c948a2bd46827f686c8bdfc933025905884113b3af8cc8365ff92c4.sock debug=false pid=3411
	* Nov 13 23:15:39 nospam-20201113231417-7409 dockerd[2338]: time="2020-11-13T23:15:39.707088708Z" level=info msg="shim containerd-shim started" address=/containerd-shim/fa65780bf6719f7a366709db8706ec99895d36b1dd458160caee31d718972fc5.sock debug=false pid=4151
	* Nov 13 23:15:40 nospam-20201113231417-7409 dockerd[2338]: time="2020-11-13T23:15:40.965920991Z" level=info msg="shim containerd-shim started" address=/containerd-shim/e3c0577cc68f43d2fd2c3425280e88b4bd4aed7ae064f45eeefee46202e829bb.sock debug=false pid=4210
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	* 80c95993ba3fe       635b36f4d89f0       2 seconds ago       Running             kube-proxy                0                   537e295d8dba5
	* 1ff3439160c79       0369cf4303ffd       28 seconds ago      Running             etcd                      0                   395edc111b799
	* 17cd449abd5f5       4830ab6185860       28 seconds ago      Running             kube-controller-manager   0                   3c808a58df490
	* 7f3cd02c17792       14cd22f7abe78       28 seconds ago      Running             kube-scheduler            0                   d2f24a972219e
	* a39cea87e3ea6       b15c6247777d7       28 seconds ago      Running             kube-apiserver            0                   8a82ec56f46a5
	* 
	* ==> describe nodes <==
	* Name:               nospam-20201113231417-7409
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=nospam-20201113231417-7409
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=f1624ef53a2521d2c375e24d59fe2d2c53b4ded0
	*                     minikube.k8s.io/name=nospam-20201113231417-7409
	*                     minikube.k8s.io/updated_at=2020_11_13T23_15_33_0700
	*                     minikube.k8s.io/version=v1.15.0
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Fri, 13 Nov 2020 23:15:26 +0000
	* Taints:             node.kubernetes.io/not-ready:NoSchedule
	* Unschedulable:      false
	* Lease:
	*   HolderIdentity:  nospam-20201113231417-7409
	*   AcquireTime:     <unset>
	*   RenewTime:       Fri, 13 Nov 2020 23:15:34 +0000
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Fri, 13 Nov 2020 23:15:35 +0000   Fri, 13 Nov 2020 23:15:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Fri, 13 Nov 2020 23:15:35 +0000   Fri, 13 Nov 2020 23:15:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Fri, 13 Nov 2020 23:15:35 +0000   Fri, 13 Nov 2020 23:15:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            False   Fri, 13 Nov 2020 23:15:35 +0000   Fri, 13 Nov 2020 23:15:18 +0000   KubeletNotReady              container runtime status check may not have completed yet
	* Addresses:
	*   InternalIP:  192.168.39.66
	*   Hostname:    nospam-20201113231417-7409
	* Capacity:
	*   cpu:                2
	*   ephemeral-storage:  16954224Ki
	*   hugepages-2Mi:      0
	*   memory:             2133072Ki
	*   pods:               110
	* Allocatable:
	*   cpu:                2
	*   ephemeral-storage:  16954224Ki
	*   hugepages-2Mi:      0
	*   memory:             2133072Ki
	*   pods:               110
	* System Info:
	*   Machine ID:                 fa00240fb79643e7be3cff40db3720b8
	*   System UUID:                fa00240f-b796-43e7-be3c-ff40db3720b8
	*   Boot ID:                    fe35aa2d-a5d6-4e4c-a4bd-c2ce2e55438d
	*   Kernel Version:             4.19.150
	*   OS Image:                   Buildroot 2020.02.7
	*   Operating System:           linux
	*   Architecture:               amd64
	*   Container Runtime Version:  docker://19.3.13
	*   Kubelet Version:            v1.19.4
	*   Kube-Proxy Version:         v1.19.4
	* Non-terminated Pods:          (5 in total)
	*   Namespace                   Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                   ----                                                  ------------  ----------  ---------------  -------------  ---
	*   kube-system                 etcd-nospam-20201113231417-7409                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	*   kube-system                 kube-apiserver-nospam-20201113231417-7409             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6s
	*   kube-system                 kube-controller-manager-nospam-20201113231417-7409    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6s
	*   kube-system                 kube-proxy-ksrvl                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	*   kube-system                 kube-scheduler-nospam-20201113231417-7409             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                550m (27%)  0 (0%)
	*   memory             0 (0%)      0 (0%)
	*   ephemeral-storage  0 (0%)      0 (0%)
	*   hugepages-2Mi      0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age                From        Message
	*   ----    ------                   ----               ----        -------
	*   Normal  NodeHasSufficientMemory  30s (x8 over 31s)  kubelet     Node nospam-20201113231417-7409 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    30s (x7 over 31s)  kubelet     Node nospam-20201113231417-7409 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     30s (x7 over 31s)  kubelet     Node nospam-20201113231417-7409 status is now: NodeHasSufficientPID
	*   Normal  Starting                 8s                 kubelet     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  7s                 kubelet     Node nospam-20201113231417-7409 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    7s                 kubelet     Node nospam-20201113231417-7409 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     7s                 kubelet     Node nospam-20201113231417-7409 status is now: NodeHasSufficientPID
	*   Normal  NodeAllocatableEnforced  6s                 kubelet     Updated Node Allocatable limit across pods
	*   Normal  Starting                 1s                 kube-proxy  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [Nov13 23:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	* [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	* [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	* [  +0.148145] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	* [  +4.849304] Unstable clock detected, switching default tracing clock to "global"
	*               If you want to keep using the local clock, then add:
	*                 "trace_clock=local"
	*               on the kernel command line
	* [  +0.000319] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	* [  +4.186958] systemd-fstab-generator[1155]: Ignoring "noauto" for root device
	* [  +0.078704] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	* [  +0.000005] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	* [  +1.536016] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1711 comm=systemd-network
	* [  +1.922260] vboxguest: loading out-of-tree module taints kernel.
	* [  +0.008181] vboxguest: PCI device not found, probably running on physical hardware.
	* [  +1.395542] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	* [  +3.321535] systemd-fstab-generator[2071]: Ignoring "noauto" for root device
	* [  +0.250924] systemd-fstab-generator[2084]: Ignoring "noauto" for root device
	* [ +14.415162] systemd-fstab-generator[2317]: Ignoring "noauto" for root device
	* [  +2.557742] kauditd_printk_skb: 68 callbacks suppressed
	* [  +0.673720] systemd-fstab-generator[2484]: Ignoring "noauto" for root device
	* [Nov13 23:15] systemd-fstab-generator[2735]: Ignoring "noauto" for root device
	* [  +2.118673] kauditd_printk_skb: 107 callbacks suppressed
	* [ +20.776067] systemd-fstab-generator[3854]: Ignoring "noauto" for root device
	* [  +9.868908] kauditd_printk_skb: 38 callbacks suppressed
	* 
	* ==> etcd [1ff3439160c7] <==
	* 2020-11-13 23:15:19.821222 I | embed: listening for metrics on http://127.0.0.1:2381
	* 2020-11-13 23:15:19.822338 I | etcdserver: 4805191378a47515 as single-node; fast-forwarding 9 ticks (election ticks 10)
	* 2020-11-13 23:15:19.823578 I | embed: listening for peers on 192.168.39.66:2380
	* raft2020/11/13 23:15:19 INFO: 4805191378a47515 switched to configuration voters=(5189581717033481493)
	* 2020-11-13 23:15:19.825544 I | etcdserver/membership: added member 4805191378a47515 [https://192.168.39.66:2380] to cluster e2b8cbf3a588a126
	* raft2020/11/13 23:15:20 INFO: 4805191378a47515 is starting a new election at term 1
	* raft2020/11/13 23:15:20 INFO: 4805191378a47515 became candidate at term 2
	* raft2020/11/13 23:15:20 INFO: 4805191378a47515 received MsgVoteResp from 4805191378a47515 at term 2
	* raft2020/11/13 23:15:20 INFO: 4805191378a47515 became leader at term 2
	* raft2020/11/13 23:15:20 INFO: raft.node: 4805191378a47515 elected leader 4805191378a47515 at term 2
	* 2020-11-13 23:15:20.168528 I | etcdserver: setting up the initial cluster version to 3.4
	* 2020-11-13 23:15:20.183997 N | etcdserver/membership: set the initial cluster version to 3.4
	* 2020-11-13 23:15:20.184621 I | etcdserver/api: enabled capabilities for version 3.4
	* 2020-11-13 23:15:20.185066 I | etcdserver: published {Name:nospam-20201113231417-7409 ClientURLs:[https://192.168.39.66:2379]} to cluster e2b8cbf3a588a126
	* 2020-11-13 23:15:20.185502 I | embed: ready to serve client requests
	* 2020-11-13 23:15:20.186630 I | embed: ready to serve client requests
	* 2020-11-13 23:15:20.189773 I | embed: serving client requests on 192.168.39.66:2379
	* 2020-11-13 23:15:20.189957 I | embed: serving client requests on 127.0.0.1:2379
	* 2020-11-13 23:15:27.079920 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	* 2020-11-13 23:15:27.082947 W | etcdserver: read-only range request "key:\"/registry/ranges/serviceips\" " with result "range_response_count:0 size:4" took too long (199.071742ms) to execute
	* 2020-11-13 23:15:27.084113 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-nospam-20201113231417-7409\" " with result "range_response_count:0 size:4" took too long (205.348852ms) to execute
	* 2020-11-13 23:15:27.085095 W | etcdserver: read-only range request "key:\"/registry/csinodes/nospam-20201113231417-7409\" " with result "range_response_count:0 size:4" took too long (208.386882ms) to execute
	* 2020-11-13 23:15:27.121497 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/nospam-20201113231417-7409\" " with result "range_response_count:0 size:4" took too long (235.210385ms) to execute
	* 2020-11-13 23:15:27.122335 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:0 size:4" took too long (236.407824ms) to execute
	* 2020-11-13 23:15:27.123004 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:0 size:4" took too long (237.134886ms) to execute
	* 
	* ==> kernel <==
	*  23:15:43 up 1 min,  0 users,  load average: 3.36, 0.91, 0.31
	* Linux nospam-20201113231417-7409 4.19.150 #1 SMP Fri Nov 6 15:58:07 PST 2020 x86_64 GNU/Linux
	* PRETTY_NAME="Buildroot 2020.02.7"
	* 
	* ==> kube-apiserver [a39cea87e3ea] <==
	* I1113 23:15:26.461200       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	* I1113 23:15:26.461510       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	* I1113 23:15:26.461768       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	* E1113 23:15:26.620486       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.66, ResourceVersion: 0, AdditionalErrorMsg: 
	* I1113 23:15:26.747505       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	* I1113 23:15:26.756749       1 cache.go:39] Caches are synced for AvailableConditionController controller
	* I1113 23:15:26.777686       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	* I1113 23:15:26.842105       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	* I1113 23:15:26.851948       1 cache.go:39] Caches are synced for autoregister controller
	* I1113 23:15:27.435196       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	* I1113 23:15:27.435485       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	* I1113 23:15:27.503268       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	* I1113 23:15:27.561789       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	* I1113 23:15:27.562344       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	* I1113 23:15:29.476202       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	* I1113 23:15:29.664897       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	* W1113 23:15:30.027247       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.39.66]
	* I1113 23:15:30.032191       1 controller.go:606] quota admission added evaluator for: endpoints
	* I1113 23:15:30.064288       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	* I1113 23:15:31.135301       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	* I1113 23:15:32.825790       1 controller.go:606] quota admission added evaluator for: deployments.apps
	* I1113 23:15:33.056179       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	* I1113 23:15:34.778117       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	* I1113 23:15:38.516770       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	* I1113 23:15:38.627889       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	* 
	* ==> kube-controller-manager [17cd449abd5f] <==
	* I1113 23:15:38.327243       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	* I1113 23:15:38.328494       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	* I1113 23:15:38.333819       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	* I1113 23:15:38.337949       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	* I1113 23:15:38.429224       1 shared_informer.go:247] Caches are synced for disruption 
	* I1113 23:15:38.429269       1 disruption.go:339] Sending events to api server.
	* I1113 23:15:38.481246       1 shared_informer.go:247] Caches are synced for deployment 
	* I1113 23:15:38.482086       1 shared_informer.go:247] Caches are synced for resource quota 
	* I1113 23:15:38.483287       1 shared_informer.go:247] Caches are synced for daemon sets 
	* I1113 23:15:38.534192       1 shared_informer.go:247] Caches are synced for taint 
	* I1113 23:15:38.537038       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	* W1113 23:15:38.538187       1 node_lifecycle_controller.go:1044] Missing timestamp for Node nospam-20201113231417-7409. Assuming now as a timestamp.
	* I1113 23:15:38.538614       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	* I1113 23:15:38.544169       1 taint_manager.go:187] Starting NoExecuteTaintManager
	* I1113 23:15:38.552049       1 shared_informer.go:247] Caches are synced for resource quota 
	* I1113 23:15:38.575892       1 event.go:291] "Event occurred" object="nospam-20201113231417-7409" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node nospam-20201113231417-7409 event: Registered Node nospam-20201113231417-7409 in Controller"
	* I1113 23:15:38.593764       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	* I1113 23:15:38.625085       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-f9fd979d6 to 1"
	* I1113 23:15:38.794084       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1113 23:15:38.795152       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I1113 23:15:38.794983       1 shared_informer.go:247] Caches are synced for garbage collector 
	* I1113 23:15:38.866342       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ksrvl"
	* I1113 23:15:38.880580       1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-ng68r"
	* E1113 23:15:39.043766       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"37e1abbb-2de0-4393-9742-508a981aa8e2", ResourceVersion:"237", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63740906133, loc:(*time.Location)(0x6a61c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001be3640), FieldsType:"FieldsV1", FieldsV1:(*v1.Fiel
dsV1)(0xc001be3660)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001be3680), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)
(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001bf4d80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v
1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001be36a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersi
stentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001be36c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.
DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.19.4", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"",
ValueFrom:(*v1.EnvVarSource)(0xc001be3700)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001afdd40), Stdin:false, StdinOnce:false, TTY:false}},
EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001c849f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00041f5e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConf
ig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00055ba78)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001c84a48)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	* E1113 23:15:39.260343       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"37e1abbb-2de0-4393-9742-508a981aa8e2", ResourceVersion:"352", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63740906133, loc:(*time.Location)(0x6a61c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000c22000), FieldsType:"FieldsV1", FieldsV1:(*v1.Fiel
dsV1)(0xc000c22020)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000c22040), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000c22060)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000c22080), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistent
DiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001b02340), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.S
caleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000c220a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolum
eSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000c220c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBD
VolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.19.4", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf
", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000c22100)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMes
sagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001afc420), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0002458b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0000f7dc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operat
or:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001c2e618)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000245978)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object h
as been modified; please apply your changes to the latest version and try again
	* 
	* ==> kube-proxy [80c95993ba3f] <==
	* I1113 23:15:41.525265       1 node.go:136] Successfully retrieved node IP: 192.168.39.66
	* I1113 23:15:41.525652       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.39.66), assume IPv4 operation
	* W1113 23:15:41.752595       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
	* I1113 23:15:41.752872       1 server_others.go:186] Using iptables Proxier.
	* W1113 23:15:41.752967       1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
	* I1113 23:15:41.753074       1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
	* I1113 23:15:41.754136       1 server.go:650] Version: v1.19.4
	* I1113 23:15:41.755571       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
	* I1113 23:15:41.755859       1 conntrack.go:52] Setting nf_conntrack_max to 131072
	* I1113 23:15:41.756614       1 conntrack.go:83] Setting conntrack hashsize to 32768
	* I1113 23:15:41.761729       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1113 23:15:41.761799       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1113 23:15:41.763752       1 config.go:315] Starting service config controller
	* I1113 23:15:41.763865       1 shared_informer.go:240] Waiting for caches to sync for service config
	* I1113 23:15:41.764009       1 config.go:224] Starting endpoint slice config controller
	* I1113 23:15:41.764018       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	* I1113 23:15:41.864503       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	* I1113 23:15:41.864709       1 shared_informer.go:247] Caches are synced for service config 
	* 
	* ==> kube-scheduler [7f3cd02c1779] <==
	* E1113 23:15:26.854689       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1113 23:15:26.854798       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1113 23:15:26.854918       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1113 23:15:26.855020       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1113 23:15:26.855101       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1113 23:15:26.855217       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1113 23:15:26.855329       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1113 23:15:26.859571       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1113 23:15:26.867896       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1113 23:15:26.868080       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1113 23:15:26.868967       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1113 23:15:27.799223       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1113 23:15:27.809293       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1113 23:15:27.870095       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1113 23:15:27.878614       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* E1113 23:15:27.921326       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1113 23:15:28.043648       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1113 23:15:28.072321       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1113 23:15:28.118329       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1113 23:15:28.121293       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1113 23:15:28.141069       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	* E1113 23:15:28.147313       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1113 23:15:28.245464       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	* E1113 23:15:28.345647       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* I1113 23:15:31.408985       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2020-11-13 23:14:32 UTC, end at Fri 2020-11-13 23:15:43 UTC. --
	* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.513972    3869 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
	* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.513993    3869 policy_none.go:43] [cpumanager] none policy: Start
	* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.540586    3869 plugin_manager.go:114] Starting Kubelet Plugin Manager
	* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.558750    3869 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.670054    3869 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.696286    3869 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.709902    3869 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.712716    3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/8669de9954d4bfb59c4e5b26ffd421d9-etcd-certs") pod "etcd-nospam-20201113231417-7409" (UID: "8669de9954d4bfb59c4e5b26ffd421d9")
	* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.712904    3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/8669de9954d4bfb59c4e5b26ffd421d9-etcd-data") pod "etcd-nospam-20201113231417-7409" (UID: "8669de9954d4bfb59c4e5b26ffd421d9")
	* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.814769    3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/6cb144f7d82285562d6fc7ed0aeee754-k8s-certs") pod "kube-controller-manager-nospam-20201113231417-7409" (UID: "6cb144f7d82285562d6fc7ed0aeee754")
	* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.815202    3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/6cb144f7d82285562d6fc7ed0aeee754-kubeconfig") pod "kube-controller-manager-nospam-20201113231417-7409" (UID: "6cb144f7d82285562d6fc7ed0aeee754")
	* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.815571    3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/6cb144f7d82285562d6fc7ed0aeee754-usr-share-ca-certificates") pod "kube-controller-manager-nospam-20201113231417-7409" (UID: "6cb144f7d82285562d6fc7ed0aeee754")
	* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.815903    3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/38744c90661b22e9ae232b0452c54538-kubeconfig") pod "kube-scheduler-nospam-20201113231417-7409" (UID: "38744c90661b22e9ae232b0452c54538")
	* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.818573    3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/552dc2aa9de19317813f734e107fb51a-ca-certs") pod "kube-apiserver-nospam-20201113231417-7409" (UID: "552dc2aa9de19317813f734e107fb51a")
	* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.819834    3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/552dc2aa9de19317813f734e107fb51a-usr-share-ca-certificates") pod "kube-apiserver-nospam-20201113231417-7409" (UID: "552dc2aa9de19317813f734e107fb51a")
	* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.820317    3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/552dc2aa9de19317813f734e107fb51a-k8s-certs") pod "kube-apiserver-nospam-20201113231417-7409" (UID: "552dc2aa9de19317813f734e107fb51a")
	* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.822298    3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/6cb144f7d82285562d6fc7ed0aeee754-ca-certs") pod "kube-controller-manager-nospam-20201113231417-7409" (UID: "6cb144f7d82285562d6fc7ed0aeee754")
	* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.830704    3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/6cb144f7d82285562d6fc7ed0aeee754-flexvolume-dir") pod "kube-controller-manager-nospam-20201113231417-7409" (UID: "6cb144f7d82285562d6fc7ed0aeee754")
	* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.843712    3869 reconciler.go:157] Reconciler: start to sync state
	* Nov 13 23:15:39 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:39.024699    3869 topology_manager.go:233] [topologymanager] Topology Admit Handler
	* Nov 13 23:15:39 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:39.215881    3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/71650814-b622-4a12-ad77-4348de106142-lib-modules") pod "kube-proxy-ksrvl" (UID: "71650814-b622-4a12-ad77-4348de106142")
	* Nov 13 23:15:39 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:39.220857    3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/71650814-b622-4a12-ad77-4348de106142-xtables-lock") pod "kube-proxy-ksrvl" (UID: "71650814-b622-4a12-ad77-4348de106142")
	* Nov 13 23:15:39 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:39.224026    3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/71650814-b622-4a12-ad77-4348de106142-kube-proxy") pod "kube-proxy-ksrvl" (UID: "71650814-b622-4a12-ad77-4348de106142")
	* Nov 13 23:15:39 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:39.226824    3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-mmntq" (UniqueName: "kubernetes.io/secret/71650814-b622-4a12-ad77-4348de106142-kube-proxy-token-mmntq") pod "kube-proxy-ksrvl" (UID: "71650814-b622-4a12-ad77-4348de106142")
	* Nov 13 23:15:40 nospam-20201113231417-7409 kubelet[3869]: W1113 23:15:40.646125    3869 pod_container_deletor.go:79] Container "537e295d8dba5b895e1f60092f2066c2564ebf341f0579e0b7d6e071927dd765" not found in pod's containers

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p nospam-20201113231417-7409 -n nospam-20201113231417-7409
helpers_test.go:255: (dbg) Run:  kubectl --context nospam-20201113231417-7409 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: non-running pods: coredns-f9fd979d6-ng68r storage-provisioner
helpers_test.go:263: ======> post-mortem[TestErrorSpam]: describe non-running pods <======
helpers_test.go:266: (dbg) Run:  kubectl --context nospam-20201113231417-7409 describe pod coredns-f9fd979d6-ng68r storage-provisioner
helpers_test.go:266: (dbg) Non-zero exit: kubectl --context nospam-20201113231417-7409 describe pod coredns-f9fd979d6-ng68r storage-provisioner: exit status 1 (144.889966ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-f9fd979d6-ng68r" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:268: kubectl --context nospam-20201113231417-7409 describe pod coredns-f9fd979d6-ng68r storage-provisioner: exit status 1
helpers_test.go:171: Cleaning up "nospam-20201113231417-7409" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p nospam-20201113231417-7409
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p nospam-20201113231417-7409: (1.188552224s)
--- FAIL: TestErrorSpam (88.77s)
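The post-mortem above lists non-running pods with a kubectl field selector and then tries to describe them, which fails with NotFound once the pods have already been replaced or removed. A minimal Go sketch of that same sequence, using only the kubectl invocations shown in the log; the context name is taken from the log, everything else is illustrative and is not the helpers_test.go implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "nospam-20201113231417-7409" // profile/context name taken from the log above

	// kubectl --context <ctx> get po -A -o=jsonpath={.items[*].metadata.name} --field-selector=status.phase!=Running
	out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		fmt.Println("listing non-running pods failed:", err)
		return
	}

	pods := strings.Fields(string(out))
	if len(pods) == 0 {
		fmt.Println("no non-running pods")
		return
	}
	fmt.Println("non-running pods:", pods)

	// kubectl --context <ctx> describe pod <pods...>; CombinedOutput also captures
	// the "Error from server (NotFound)" stderr seen in the log when pods are gone.
	args := append([]string{"--context", ctx, "describe", "pod"}, pods...)
	desc, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Print(string(desc))
	if err != nil {
		fmt.Println("describe exited non-zero (may be ok if the pods were already deleted):", err)
	}
}

CombinedOutput is used for the describe step so the NotFound stderr shown above would be captured alongside stdout rather than lost.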

                                                
                                    
x
+
TestScheduledStop (114.4s)

                                                
                                                
=== RUN   TestScheduledStop
scheduled_stop_test.go:74: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20201113230548-7409 --driver=kvm2 
scheduled_stop_test.go:74: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20201113230548-7409 --driver=kvm2 : (1m4.837394111s)
scheduled_stop_test.go:82: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20201113230548-7409 --schedule 5m
scheduled_stop_test.go:114: signal error was:  <nil>
scheduled_stop_test.go:82: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20201113230548-7409 --schedule 10s
scheduled_stop_test.go:114: signal error was:  os: process already finished
scheduled_stop_test.go:61: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20201113230548-7409 -n scheduled-stop-20201113230548-7409
scheduled_stop_test.go:61: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20201113230548-7409 -n scheduled-stop-20201113230548-7409
scheduled_stop_test.go:61: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20201113230548-7409 -n scheduled-stop-20201113230548-7409
scheduled_stop_test.go:61: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20201113230548-7409 -n scheduled-stop-20201113230548-7409
scheduled_stop_test.go:61: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20201113230548-7409 -n scheduled-stop-20201113230548-7409
scheduled_stop_test.go:61: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20201113230548-7409 -n scheduled-stop-20201113230548-7409
scheduled_stop_test.go:61: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20201113230548-7409 -n scheduled-stop-20201113230548-7409: exit status 3 (34.839625377s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1113 23:07:42.462512   16124 status.go:359] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	E1113 23:07:42.462669   16124 status.go:232] status error: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host

                                                
                                                
** /stderr **
scheduled_stop_test.go:61: status error: exit status 3 (may be ok)
scheduled_stop_test.go:68: error expected post-stop host status to be -"Stopped"- but got *"Error"*
panic.go:617: *** TestScheduledStop FAILED at 2020-11-13 23:07:42.464870287 +0000 UTC m=+1367.398255445
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20201113230548-7409 -n scheduled-stop-20201113230548-7409
helpers_test.go:233: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20201113230548-7409 -n scheduled-stop-20201113230548-7409: exit status 7 (92.587579ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:233: status error: exit status 7 (may be ok)
helpers_test.go:235: "scheduled-stop-20201113230548-7409" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:171: Cleaning up "scheduled-stop-20201113230548-7409" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20201113230548-7409
--- FAIL: TestScheduledStop (114.40s)
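The six back-to-back status invocations above are effectively a poll loop waiting for the host to report "Stopped"; the failing attempt hit exit status 3 with "no route to host" on SSH, suggesting the VM was still mid-shutdown when status was checked, so the host state read "Error". A hedged Go sketch of that polling pattern, reusing the binary path and flags from the log; the retry interval and timeout are assumptions, and this is not the scheduled_stop_test.go source:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForStopped polls `minikube status --format={{.Host}}` until the host
// reports "Stopped" or the timeout expires. Flags and binary path mirror the
// log above; the interval and timeout are assumptions for illustration.
func waitForStopped(profile string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// A non-zero exit (e.g. exit status 3 while SSH is unreachable) still
		// returns the captured stdout, which may read "Error" as in the log.
		out, _ := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", profile, "-n", profile).Output()
		state := strings.TrimSpace(string(out))
		if state == "Stopped" {
			return nil
		}
		fmt.Println("host state:", state, "- retrying")
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("host did not reach Stopped within %s", timeout)
}

func main() {
	if err := waitForStopped("scheduled-stop-20201113230548-7409", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}

The sketch treats "Error" as a transient state and keeps polling until the deadline, which is the pattern the repeated status calls above suggest.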

                                                
                                    
x
+
TestStartStop/group/crio/serial/VerifyKubernetesImages (6.2s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/VerifyKubernetesImages
start_stop_delete_test.go:232: (dbg) Run:  out/minikube-linux-amd64 ssh -p crio-20201113234030-7409 "sudo crictl images -o json"
start_stop_delete_test.go:232: Found non-minikube image: library/busybox:1.28.4-glibc
start_stop_delete_test.go:232: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:232: Found non-minikube image: minikube-local-cache-test:functional-20201113225438-7409
start_stop_delete_test.go:232: v1.15.7 images mismatch (-want +got):
[]string{
	... // 6 identical elements
	"k8s.gcr.io/kube-scheduler:v1.15.7",
	"k8s.gcr.io/pause:3.1",
+ 	"k8s.gcr.io/pause:3.2",
	"kubernetesui/dashboard:v2.0.3",
	"kubernetesui/metrics-scraper:v1.0.4",
}
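The "-want +got" diff above has the shape of a go-cmp string-slice comparison; the only difference reported is an extra k8s.gcr.io/pause:3.2 image alongside the expected pause:3.1. A hypothetical sketch of such an image-list check follows; the expected list is abbreviated and assumed for illustration (not minikube's real expected-image table), and the "got" slice is hard-coded where the test parses the output of `sudo crictl images -o json`:

package main

import (
	"fmt"
	"sort"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// Expected images: abbreviated and assumed for illustration only.
	want := []string{
		"k8s.gcr.io/kube-scheduler:v1.15.7",
		"k8s.gcr.io/pause:3.1",
		"kubernetesui/dashboard:v2.0.3",
		"kubernetesui/metrics-scraper:v1.0.4",
	}
	// In the real test this would be parsed from `sudo crictl images -o json`;
	// hard-coded here to reproduce the extra pause:3.2 entry reported above.
	got := []string{
		"k8s.gcr.io/kube-scheduler:v1.15.7",
		"k8s.gcr.io/pause:3.1",
		"k8s.gcr.io/pause:3.2",
		"kubernetesui/dashboard:v2.0.3",
		"kubernetesui/metrics-scraper:v1.0.4",
	}
	sort.Strings(want)
	sort.Strings(got)
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.15.7 images mismatch (-want +got):\n%s", diff)
	}
}

Sorting both slices before diffing keeps the comparison independent of the order in which crictl reports images.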
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p crio-20201113234030-7409 -n crio-20201113234030-7409
helpers_test.go:238: <<< TestStartStop/group/crio/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/crio/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p crio-20201113234030-7409 logs -n 25
helpers_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p crio-20201113234030-7409 logs -n 25: (2.003079106s)
helpers_test.go:246: TestStartStop/group/crio/serial/VerifyKubernetesImages logs: 
-- stdout --
	* ==> CRI-O <==
	* -- Logs begin at Fri 2020-11-13 23:46:27 UTC, end at Fri 2020-11-13 23:49:41 UTC. --
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.047440257Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=447ca386-90d3-4589-978d-fab047ef272c name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.047587273Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:59" id=447ca386-90d3-4589-978d-fab047ef272c name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.048080497Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46b11ae2e59f58c5b764a3dfa5d1a376eed7038118e3d501f3918f28be76d8bb,PodSandboxId:18243ba7ebb0e5993683e8679909d7ad366b46a9e90b0e953abfd5cdb1cd6c73,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4,},ImageRef:docker.io/kubernetesui/metrics-scraper@sha256:f0350dbe60f3787b16c4f5f484bf78937df4a8391f9eb99af122e49e2155b097,State:CONTAINER_RUNNING,CreatedAt:1605311298944045900,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c8b69c96c-tr9cr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e64cb49d-a4bd-46c7-b3db-edec824639fe,},Annotations:map[string]string{io.kubernetes.container.hash: 558135cc,io.kubernetes.container.por
ts: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c452ced717e6c28fef3e562378b81031fde1f99151d134eb401858067ef99008,PodSandboxId:bd016b3a26069df4139bcd83638273850a7240dc5186922d3c6942077fc1041b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:503bc4b7440b9039d0a18858bba30906e25e2f690094b2b4f8735dfc3609cda2,},ImageRef:docker.io/kubernetesui/dashboard@sha256:0f9243b2dbcc9d631cd5cbdc950b6c4ae3ff5634a91f768d4a4b27f199626631,State:CONTAINER_RUNNING,CreatedAt:1605311298273524547,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-5ddb79bb9f-ndvzw,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fb40b588-4533-4fa9-a315-47b33b420ed6,},Annotations:map[string]strin
g{io.kubernetes.container.hash: e27068b3,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855d24364ab052e95bdfa23928361aff4e4341cdc3cf32478ac98a3a7a834ce2,PodSandboxId:582067d0e761e711e466bc37f0a1396c1f934cf1be6e638f4152e5f8988ab2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:bad58561c4be797bbea256940eea1d9967b7581d1103f9e4f03f32936c1ae289,},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:5f02deeb1870b24a5d26141e310a429511a5668c8488afef8228fb3faef27ca8,State:CONTAINER_RUNNING,CreatedAt:1605311297276443998,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d54d7102-688e-4eb4-aa
53-2a59b7110ab4,},Annotations:map[string]string{io.kubernetes.container.hash: f1ad06fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35617cb7d28c41610467d3341ebd7c80f72c5c9d994138c0b6fd50a76fe6cf7,PodSandboxId:771b1ff5051d5066abebb97620752a11a9d5628ffd7529cfb581317297f843e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ae3d9889423ede337df3814baa77445e566597a5a882f3cdf933b4d9e0025f0f,},ImageRef:k8s.gcr.io/kube-proxy@sha256:f2f8c6a7bf3ea6ddcdeb39c4b79ac3f7d900ede7600df924bb525478ecbbc534,State:CONTAINER_RUNNING,CreatedAt:1605311295758016287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pmsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f9ac0-e218-41b1-aadd-518dce1505e1,},Annotations:map[string]string{io.kubernetes.container.
hash: dcc93b45,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb552abd8bd7bd334b49afeb31ff82914f12d1f436062f09b1e4a97f87d2689,PodSandboxId:c3155d201b16bb5ce1acf1d838da70ee810696e831b9b47c277302c0c6a715a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1605311291076398725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d14afba-9944-433e-818d-c5969fd23efc,},Annotations:map[string]string{io.kubernetes.container.hash: fc4b11dc,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402919d5381465bf52d673eb2473c96187c928ccc4675e27efce14fa0e6e0135,PodSandboxId:59f33c7f0806cc925dca35d1c76c53bdac01f2abf5399528ae892c0063b1fb9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,},ImageRef:k8s.gcr.io/coredns@sha256:621c94aaeedd98c1ca3eb724dc0a430b43eab58c3199832dc8eafd423150018a,State:CONTAINER_RUNNING,CreatedAt:1605311286728482060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d4dd4b4db-9nvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7678ab1d-33c2-4613-ab05-593cc3a77698,},Annotations:map[string]string{io.kubernetes.container.hash: 48c9a032,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13de2f15185127ee7c466dfc7982f19d4eb72366cb889c8b55b95da0f48e769,PodSandboxId:0bf87146b75ec750196aceea739c2cbbdab2d2e1241d1e3940c72687039b4b33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,},ImageRef:k8s.gcr.io/etcd@sha256:2f37d055a1d6be8d75c56896a6ecfd5c597505141a8d3ad8e6a9838657a4c52e,State:CONTAINER_RUNNING,CreatedAt:1605311272796769093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f93bc1ca1318454e4f387d079d3ed3,},Annotations:map[string]string{io.kub
ernetes.container.hash: 5e1c6040,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdae31c4913657c8ebfeb5ff077d9e1790fd9dc13032dc55d51508d679b9bdc4,PodSandboxId:2b89e7772256a9abb3de32f7f3078e54f5eea95620b36b794aaee76ce50e5d09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78b4180ab00d0fb99b1be2b5ef92a4831ad07f00f27e6746828f374497d79367,},ImageRef:k8s.gcr.io/kube-scheduler@sha256:482ca815d16c723cc4e2a6d37e6d0aed9706dbdf4241b9e32b1a19aea9c99ce0,State:CONTAINER_RUNNING,CreatedAt:1605311270129780451,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d56135b6f61d5db3f635e70693e7224d,},Annotations:map[string]string{io.kubernetes.container.hash: f5e21dc5,io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8f21e69e2c5c7995293d40698100bb2bf70c9384eca44295c32fbe60e414e8,PodSandboxId:00d34b55df0ae6348dd07e33d161f77d521ce2cc34d6b3d8322b0110f704c92b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c500a024ff843278184e5454ff6ee040a106c867c5a0361886fd3057cace2264,},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7424b172281ffbb3937b1fb53709a63190fb9badc29993b350c642cbd8f53a50,State:CONTAINER_RUNNING,CreatedAt:1605311269958130243,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b29d223d9edf898051121d1f2e3d54,},Annotations:map[string]string{io.kubernetes.container.hash: b4549cfc,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65b395688c7844df79cc9b218e069111a0ef5b5e5dd0fef2bb5435b4eb8564c,PodSandboxId:244887518e383f9f69ea00ec174f8907d5447722838ac8f3f39137302e075bcc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d2f090f2479fbf92c508100e0a6106b3516bb70421a465586661feb1494145a2,},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:edb80c790bea171018d7f26ec7dd5a4f15da007811dbb17cf29b1ce8fdb96b91,State:CONTAINER_RUNNING,CreatedAt:1605311269818807034,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61122dbf61c0657d93f147ac231888c,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6049d3,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=447ca386-90d3-4589-978d-fab047ef272c name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.084458677Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ecdfb584-8b0b-43f0-a80d-c67f23d27560 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.084539165Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:59" id=ecdfb584-8b0b-43f0-a80d-c67f23d27560 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.084971117Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46b11ae2e59f58c5b764a3dfa5d1a376eed7038118e3d501f3918f28be76d8bb,PodSandboxId:18243ba7ebb0e5993683e8679909d7ad366b46a9e90b0e953abfd5cdb1cd6c73,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4,},ImageRef:docker.io/kubernetesui/metrics-scraper@sha256:f0350dbe60f3787b16c4f5f484bf78937df4a8391f9eb99af122e49e2155b097,State:CONTAINER_RUNNING,CreatedAt:1605311298944045900,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c8b69c96c-tr9cr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e64cb49d-a4bd-46c7-b3db-edec824639fe,},Annotations:map[string]string{io.kubernetes.container.hash: 558135cc,io.kubernetes.container.por
ts: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c452ced717e6c28fef3e562378b81031fde1f99151d134eb401858067ef99008,PodSandboxId:bd016b3a26069df4139bcd83638273850a7240dc5186922d3c6942077fc1041b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:503bc4b7440b9039d0a18858bba30906e25e2f690094b2b4f8735dfc3609cda2,},ImageRef:docker.io/kubernetesui/dashboard@sha256:0f9243b2dbcc9d631cd5cbdc950b6c4ae3ff5634a91f768d4a4b27f199626631,State:CONTAINER_RUNNING,CreatedAt:1605311298273524547,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-5ddb79bb9f-ndvzw,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fb40b588-4533-4fa9-a315-47b33b420ed6,},Annotations:map[string]strin
g{io.kubernetes.container.hash: e27068b3,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855d24364ab052e95bdfa23928361aff4e4341cdc3cf32478ac98a3a7a834ce2,PodSandboxId:582067d0e761e711e466bc37f0a1396c1f934cf1be6e638f4152e5f8988ab2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:bad58561c4be797bbea256940eea1d9967b7581d1103f9e4f03f32936c1ae289,},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:5f02deeb1870b24a5d26141e310a429511a5668c8488afef8228fb3faef27ca8,State:CONTAINER_RUNNING,CreatedAt:1605311297276443998,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d54d7102-688e-4eb4-aa
53-2a59b7110ab4,},Annotations:map[string]string{io.kubernetes.container.hash: f1ad06fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35617cb7d28c41610467d3341ebd7c80f72c5c9d994138c0b6fd50a76fe6cf7,PodSandboxId:771b1ff5051d5066abebb97620752a11a9d5628ffd7529cfb581317297f843e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ae3d9889423ede337df3814baa77445e566597a5a882f3cdf933b4d9e0025f0f,},ImageRef:k8s.gcr.io/kube-proxy@sha256:f2f8c6a7bf3ea6ddcdeb39c4b79ac3f7d900ede7600df924bb525478ecbbc534,State:CONTAINER_RUNNING,CreatedAt:1605311295758016287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pmsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f9ac0-e218-41b1-aadd-518dce1505e1,},Annotations:map[string]string{io.kubernetes.container.
hash: dcc93b45,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb552abd8bd7bd334b49afeb31ff82914f12d1f436062f09b1e4a97f87d2689,PodSandboxId:c3155d201b16bb5ce1acf1d838da70ee810696e831b9b47c277302c0c6a715a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1605311291076398725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d14afba-9944-433e-818d-c5969fd23efc,},Annotations:map[string]string{io.kubernetes.container.hash: fc4b11dc,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402919d5381465bf52d673eb2473c96187c928ccc4675e27efce14fa0e6e0135,PodSandboxId:59f33c7f0806cc925dca35d1c76c53bdac01f2abf5399528ae892c0063b1fb9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,},ImageRef:k8s.gcr.io/coredns@sha256:621c94aaeedd98c1ca3eb724dc0a430b43eab58c3199832dc8eafd423150018a,State:CONTAINER_RUNNING,CreatedAt:1605311286728482060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d4dd4b4db-9nvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7678ab1d-33c2-4613-ab05-593cc3a77698,},Annotations:map[string]string{io.kubernetes.container.hash: 48c9a032,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13de2f15185127ee7c466dfc7982f19d4eb72366cb889c8b55b95da0f48e769,PodSandboxId:0bf87146b75ec750196aceea739c2cbbdab2d2e1241d1e3940c72687039b4b33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,},ImageRef:k8s.gcr.io/etcd@sha256:2f37d055a1d6be8d75c56896a6ecfd5c597505141a8d3ad8e6a9838657a4c52e,State:CONTAINER_RUNNING,CreatedAt:1605311272796769093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f93bc1ca1318454e4f387d079d3ed3,},Annotations:map[string]string{io.kub
ernetes.container.hash: 5e1c6040,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdae31c4913657c8ebfeb5ff077d9e1790fd9dc13032dc55d51508d679b9bdc4,PodSandboxId:2b89e7772256a9abb3de32f7f3078e54f5eea95620b36b794aaee76ce50e5d09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78b4180ab00d0fb99b1be2b5ef92a4831ad07f00f27e6746828f374497d79367,},ImageRef:k8s.gcr.io/kube-scheduler@sha256:482ca815d16c723cc4e2a6d37e6d0aed9706dbdf4241b9e32b1a19aea9c99ce0,State:CONTAINER_RUNNING,CreatedAt:1605311270129780451,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d56135b6f61d5db3f635e70693e7224d,},Annotations:map[string]string{io.kubernetes.container.hash: f5e21dc5,io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8f21e69e2c5c7995293d40698100bb2bf70c9384eca44295c32fbe60e414e8,PodSandboxId:00d34b55df0ae6348dd07e33d161f77d521ce2cc34d6b3d8322b0110f704c92b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c500a024ff843278184e5454ff6ee040a106c867c5a0361886fd3057cace2264,},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7424b172281ffbb3937b1fb53709a63190fb9badc29993b350c642cbd8f53a50,State:CONTAINER_RUNNING,CreatedAt:1605311269958130243,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b29d223d9edf898051121d1f2e3d54,},Annotations:map[string]string{io.kubernetes.container.hash: b4549cfc,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65b395688c7844df79cc9b218e069111a0ef5b5e5dd0fef2bb5435b4eb8564c,PodSandboxId:244887518e383f9f69ea00ec174f8907d5447722838ac8f3f39137302e075bcc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d2f090f2479fbf92c508100e0a6106b3516bb70421a465586661feb1494145a2,},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:edb80c790bea171018d7f26ec7dd5a4f15da007811dbb17cf29b1ce8fdb96b91,State:CONTAINER_RUNNING,CreatedAt:1605311269818807034,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61122dbf61c0657d93f147ac231888c,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6049d3,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ecdfb584-8b0b-43f0-a80d-c67f23d27560 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.091724160Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="go-grpc-middleware/chain.go:25" id=3cbc6800-24ba-410c-9130-c3e9795cfb76 name=/runtime.v1alpha2.RuntimeService/Version
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.091835978Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.18.3,RuntimeApiVersion:v1alpha1,}" file="go-grpc-middleware/chain.go:25" id=3cbc6800-24ba-410c-9130-c3e9795cfb76 name=/runtime.v1alpha2.RuntimeService/Version
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.120377729Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1f4be365-93f8-41ad-b836-91a0420fa973 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.120457684Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:59" id=1f4be365-93f8-41ad-b836-91a0420fa973 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.120966805Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46b11ae2e59f58c5b764a3dfa5d1a376eed7038118e3d501f3918f28be76d8bb,PodSandboxId:18243ba7ebb0e5993683e8679909d7ad366b46a9e90b0e953abfd5cdb1cd6c73,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4,},ImageRef:docker.io/kubernetesui/metrics-scraper@sha256:f0350dbe60f3787b16c4f5f484bf78937df4a8391f9eb99af122e49e2155b097,State:CONTAINER_RUNNING,CreatedAt:1605311298944045900,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c8b69c96c-tr9cr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e64cb49d-a4bd-46c7-b3db-edec824639fe,},Annotations:map[string]string{io.kubernetes.container.hash: 558135cc,io.kubernetes.container.por
ts: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c452ced717e6c28fef3e562378b81031fde1f99151d134eb401858067ef99008,PodSandboxId:bd016b3a26069df4139bcd83638273850a7240dc5186922d3c6942077fc1041b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:503bc4b7440b9039d0a18858bba30906e25e2f690094b2b4f8735dfc3609cda2,},ImageRef:docker.io/kubernetesui/dashboard@sha256:0f9243b2dbcc9d631cd5cbdc950b6c4ae3ff5634a91f768d4a4b27f199626631,State:CONTAINER_RUNNING,CreatedAt:1605311298273524547,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-5ddb79bb9f-ndvzw,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fb40b588-4533-4fa9-a315-47b33b420ed6,},Annotations:map[string]strin
g{io.kubernetes.container.hash: e27068b3,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855d24364ab052e95bdfa23928361aff4e4341cdc3cf32478ac98a3a7a834ce2,PodSandboxId:582067d0e761e711e466bc37f0a1396c1f934cf1be6e638f4152e5f8988ab2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:bad58561c4be797bbea256940eea1d9967b7581d1103f9e4f03f32936c1ae289,},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:5f02deeb1870b24a5d26141e310a429511a5668c8488afef8228fb3faef27ca8,State:CONTAINER_RUNNING,CreatedAt:1605311297276443998,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d54d7102-688e-4eb4-aa
53-2a59b7110ab4,},Annotations:map[string]string{io.kubernetes.container.hash: f1ad06fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35617cb7d28c41610467d3341ebd7c80f72c5c9d994138c0b6fd50a76fe6cf7,PodSandboxId:771b1ff5051d5066abebb97620752a11a9d5628ffd7529cfb581317297f843e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ae3d9889423ede337df3814baa77445e566597a5a882f3cdf933b4d9e0025f0f,},ImageRef:k8s.gcr.io/kube-proxy@sha256:f2f8c6a7bf3ea6ddcdeb39c4b79ac3f7d900ede7600df924bb525478ecbbc534,State:CONTAINER_RUNNING,CreatedAt:1605311295758016287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pmsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f9ac0-e218-41b1-aadd-518dce1505e1,},Annotations:map[string]string{io.kubernetes.container.
hash: dcc93b45,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb552abd8bd7bd334b49afeb31ff82914f12d1f436062f09b1e4a97f87d2689,PodSandboxId:c3155d201b16bb5ce1acf1d838da70ee810696e831b9b47c277302c0c6a715a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1605311291076398725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d14afba-9944-433e-818d-c5969fd23efc,},Annotations:map[string]string{io.kubernetes.container.hash: fc4b11dc,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402919d5381465bf52d673eb2473c96187c928ccc4675e27efce14fa0e6e0135,PodSandboxId:59f33c7f0806cc925dca35d1c76c53bdac01f2abf5399528ae892c0063b1fb9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,},ImageRef:k8s.gcr.io/coredns@sha256:621c94aaeedd98c1ca3eb724dc0a430b43eab58c3199832dc8eafd423150018a,State:CONTAINER_RUNNING,CreatedAt:1605311286728482060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d4dd4b4db-9nvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7678ab1d-33c2-4613-ab05-593cc3a77698,},Annotations:map[string]string{io.kubernetes.container.hash: 48c9a032,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13de2f15185127ee7c466dfc7982f19d4eb72366cb889c8b55b95da0f48e769,PodSandboxId:0bf87146b75ec750196aceea739c2cbbdab2d2e1241d1e3940c72687039b4b33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,},ImageRef:k8s.gcr.io/etcd@sha256:2f37d055a1d6be8d75c56896a6ecfd5c597505141a8d3ad8e6a9838657a4c52e,State:CONTAINER_RUNNING,CreatedAt:1605311272796769093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f93bc1ca1318454e4f387d079d3ed3,},Annotations:map[string]string{io.kub
ernetes.container.hash: 5e1c6040,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdae31c4913657c8ebfeb5ff077d9e1790fd9dc13032dc55d51508d679b9bdc4,PodSandboxId:2b89e7772256a9abb3de32f7f3078e54f5eea95620b36b794aaee76ce50e5d09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78b4180ab00d0fb99b1be2b5ef92a4831ad07f00f27e6746828f374497d79367,},ImageRef:k8s.gcr.io/kube-scheduler@sha256:482ca815d16c723cc4e2a6d37e6d0aed9706dbdf4241b9e32b1a19aea9c99ce0,State:CONTAINER_RUNNING,CreatedAt:1605311270129780451,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d56135b6f61d5db3f635e70693e7224d,},Annotations:map[string]string{io.kubernetes.container.hash: f5e21dc5,io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8f21e69e2c5c7995293d40698100bb2bf70c9384eca44295c32fbe60e414e8,PodSandboxId:00d34b55df0ae6348dd07e33d161f77d521ce2cc34d6b3d8322b0110f704c92b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c500a024ff843278184e5454ff6ee040a106c867c5a0361886fd3057cace2264,},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7424b172281ffbb3937b1fb53709a63190fb9badc29993b350c642cbd8f53a50,State:CONTAINER_RUNNING,CreatedAt:1605311269958130243,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b29d223d9edf898051121d1f2e3d54,},Annotations:map[string]string{io.kubernetes.container.hash: b4549cfc,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65b395688c7844df79cc9b218e069111a0ef5b5e5dd0fef2bb5435b4eb8564c,PodSandboxId:244887518e383f9f69ea00ec174f8907d5447722838ac8f3f39137302e075bcc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d2f090f2479fbf92c508100e0a6106b3516bb70421a465586661feb1494145a2,},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:edb80c790bea171018d7f26ec7dd5a4f15da007811dbb17cf29b1ce8fdb96b91,State:CONTAINER_RUNNING,CreatedAt:1605311269818807034,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61122dbf61c0657d93f147ac231888c,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6049d3,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1f4be365-93f8-41ad-b836-91a0420fa973 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.155961692Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6647127c-ba54-4b6b-a82f-5f538fc62fea name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.156127559Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:59" id=6647127c-ba54-4b6b-a82f-5f538fc62fea name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.156411913Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46b11ae2e59f58c5b764a3dfa5d1a376eed7038118e3d501f3918f28be76d8bb,PodSandboxId:18243ba7ebb0e5993683e8679909d7ad366b46a9e90b0e953abfd5cdb1cd6c73,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4,},ImageRef:docker.io/kubernetesui/metrics-scraper@sha256:f0350dbe60f3787b16c4f5f484bf78937df4a8391f9eb99af122e49e2155b097,State:CONTAINER_RUNNING,CreatedAt:1605311298944045900,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c8b69c96c-tr9cr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e64cb49d-a4bd-46c7-b3db-edec824639fe,},Annotations:map[string]string{io.kubernetes.container.hash: 558135cc,io.kubernetes.container.por
ts: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c452ced717e6c28fef3e562378b81031fde1f99151d134eb401858067ef99008,PodSandboxId:bd016b3a26069df4139bcd83638273850a7240dc5186922d3c6942077fc1041b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:503bc4b7440b9039d0a18858bba30906e25e2f690094b2b4f8735dfc3609cda2,},ImageRef:docker.io/kubernetesui/dashboard@sha256:0f9243b2dbcc9d631cd5cbdc950b6c4ae3ff5634a91f768d4a4b27f199626631,State:CONTAINER_RUNNING,CreatedAt:1605311298273524547,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-5ddb79bb9f-ndvzw,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fb40b588-4533-4fa9-a315-47b33b420ed6,},Annotations:map[string]strin
g{io.kubernetes.container.hash: e27068b3,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855d24364ab052e95bdfa23928361aff4e4341cdc3cf32478ac98a3a7a834ce2,PodSandboxId:582067d0e761e711e466bc37f0a1396c1f934cf1be6e638f4152e5f8988ab2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:bad58561c4be797bbea256940eea1d9967b7581d1103f9e4f03f32936c1ae289,},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:5f02deeb1870b24a5d26141e310a429511a5668c8488afef8228fb3faef27ca8,State:CONTAINER_RUNNING,CreatedAt:1605311297276443998,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d54d7102-688e-4eb4-aa
53-2a59b7110ab4,},Annotations:map[string]string{io.kubernetes.container.hash: f1ad06fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35617cb7d28c41610467d3341ebd7c80f72c5c9d994138c0b6fd50a76fe6cf7,PodSandboxId:771b1ff5051d5066abebb97620752a11a9d5628ffd7529cfb581317297f843e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ae3d9889423ede337df3814baa77445e566597a5a882f3cdf933b4d9e0025f0f,},ImageRef:k8s.gcr.io/kube-proxy@sha256:f2f8c6a7bf3ea6ddcdeb39c4b79ac3f7d900ede7600df924bb525478ecbbc534,State:CONTAINER_RUNNING,CreatedAt:1605311295758016287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pmsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f9ac0-e218-41b1-aadd-518dce1505e1,},Annotations:map[string]string{io.kubernetes.container.
hash: dcc93b45,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb552abd8bd7bd334b49afeb31ff82914f12d1f436062f09b1e4a97f87d2689,PodSandboxId:c3155d201b16bb5ce1acf1d838da70ee810696e831b9b47c277302c0c6a715a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1605311291076398725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d14afba-9944-433e-818d-c5969fd23efc,},Annotations:map[string]string{io.kubernetes.container.hash: fc4b11dc,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402919d5381465bf52d673eb2473c96187c928ccc4675e27efce14fa0e6e0135,PodSandboxId:59f33c7f0806cc925dca35d1c76c53bdac01f2abf5399528ae892c0063b1fb9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,},ImageRef:k8s.gcr.io/coredns@sha256:621c94aaeedd98c1ca3eb724dc0a430b43eab58c3199832dc8eafd423150018a,State:CONTAINER_RUNNING,CreatedAt:1605311286728482060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d4dd4b4db-9nvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7678ab1d-33c2-4613-ab05-593cc3a77698,},Annotations:map[string]string{io.kubernetes.container.hash: 48c9a032,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13de2f15185127ee7c466dfc7982f19d4eb72366cb889c8b55b95da0f48e769,PodSandboxId:0bf87146b75ec750196aceea739c2cbbdab2d2e1241d1e3940c72687039b4b33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,},ImageRef:k8s.gcr.io/etcd@sha256:2f37d055a1d6be8d75c56896a6ecfd5c597505141a8d3ad8e6a9838657a4c52e,State:CONTAINER_RUNNING,CreatedAt:1605311272796769093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f93bc1ca1318454e4f387d079d3ed3,},Annotations:map[string]string{io.kub
ernetes.container.hash: 5e1c6040,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdae31c4913657c8ebfeb5ff077d9e1790fd9dc13032dc55d51508d679b9bdc4,PodSandboxId:2b89e7772256a9abb3de32f7f3078e54f5eea95620b36b794aaee76ce50e5d09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78b4180ab00d0fb99b1be2b5ef92a4831ad07f00f27e6746828f374497d79367,},ImageRef:k8s.gcr.io/kube-scheduler@sha256:482ca815d16c723cc4e2a6d37e6d0aed9706dbdf4241b9e32b1a19aea9c99ce0,State:CONTAINER_RUNNING,CreatedAt:1605311270129780451,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d56135b6f61d5db3f635e70693e7224d,},Annotations:map[string]string{io.kubernetes.container.hash: f5e21dc5,io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8f21e69e2c5c7995293d40698100bb2bf70c9384eca44295c32fbe60e414e8,PodSandboxId:00d34b55df0ae6348dd07e33d161f77d521ce2cc34d6b3d8322b0110f704c92b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c500a024ff843278184e5454ff6ee040a106c867c5a0361886fd3057cace2264,},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7424b172281ffbb3937b1fb53709a63190fb9badc29993b350c642cbd8f53a50,State:CONTAINER_RUNNING,CreatedAt:1605311269958130243,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b29d223d9edf898051121d1f2e3d54,},Annotations:map[string]string{io.kubernetes.container.hash: b4549cfc,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65b395688c7844df79cc9b218e069111a0ef5b5e5dd0fef2bb5435b4eb8564c,PodSandboxId:244887518e383f9f69ea00ec174f8907d5447722838ac8f3f39137302e075bcc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d2f090f2479fbf92c508100e0a6106b3516bb70421a465586661feb1494145a2,},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:edb80c790bea171018d7f26ec7dd5a4f15da007811dbb17cf29b1ce8fdb96b91,State:CONTAINER_RUNNING,CreatedAt:1605311269818807034,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61122dbf61c0657d93f147ac231888c,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6049d3,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6647127c-ba54-4b6b-a82f-5f538fc62fea name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.185282370Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ca5fa58f-2a5a-4059-9b68-07c721c6741d name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.185448514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:59" id=ca5fa58f-2a5a-4059-9b68-07c721c6741d name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.185880576Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46b11ae2e59f58c5b764a3dfa5d1a376eed7038118e3d501f3918f28be76d8bb,PodSandboxId:18243ba7ebb0e5993683e8679909d7ad366b46a9e90b0e953abfd5cdb1cd6c73,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4,},ImageRef:docker.io/kubernetesui/metrics-scraper@sha256:f0350dbe60f3787b16c4f5f484bf78937df4a8391f9eb99af122e49e2155b097,State:CONTAINER_RUNNING,CreatedAt:1605311298944045900,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c8b69c96c-tr9cr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e64cb49d-a4bd-46c7-b3db-edec824639fe,},Annotations:map[string]string{io.kubernetes.container.hash: 558135cc,io.kubernetes.container.por
ts: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c452ced717e6c28fef3e562378b81031fde1f99151d134eb401858067ef99008,PodSandboxId:bd016b3a26069df4139bcd83638273850a7240dc5186922d3c6942077fc1041b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:503bc4b7440b9039d0a18858bba30906e25e2f690094b2b4f8735dfc3609cda2,},ImageRef:docker.io/kubernetesui/dashboard@sha256:0f9243b2dbcc9d631cd5cbdc950b6c4ae3ff5634a91f768d4a4b27f199626631,State:CONTAINER_RUNNING,CreatedAt:1605311298273524547,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-5ddb79bb9f-ndvzw,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fb40b588-4533-4fa9-a315-47b33b420ed6,},Annotations:map[string]strin
g{io.kubernetes.container.hash: e27068b3,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855d24364ab052e95bdfa23928361aff4e4341cdc3cf32478ac98a3a7a834ce2,PodSandboxId:582067d0e761e711e466bc37f0a1396c1f934cf1be6e638f4152e5f8988ab2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:bad58561c4be797bbea256940eea1d9967b7581d1103f9e4f03f32936c1ae289,},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:5f02deeb1870b24a5d26141e310a429511a5668c8488afef8228fb3faef27ca8,State:CONTAINER_RUNNING,CreatedAt:1605311297276443998,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d54d7102-688e-4eb4-aa
53-2a59b7110ab4,},Annotations:map[string]string{io.kubernetes.container.hash: f1ad06fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35617cb7d28c41610467d3341ebd7c80f72c5c9d994138c0b6fd50a76fe6cf7,PodSandboxId:771b1ff5051d5066abebb97620752a11a9d5628ffd7529cfb581317297f843e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ae3d9889423ede337df3814baa77445e566597a5a882f3cdf933b4d9e0025f0f,},ImageRef:k8s.gcr.io/kube-proxy@sha256:f2f8c6a7bf3ea6ddcdeb39c4b79ac3f7d900ede7600df924bb525478ecbbc534,State:CONTAINER_RUNNING,CreatedAt:1605311295758016287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pmsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f9ac0-e218-41b1-aadd-518dce1505e1,},Annotations:map[string]string{io.kubernetes.container.
hash: dcc93b45,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb552abd8bd7bd334b49afeb31ff82914f12d1f436062f09b1e4a97f87d2689,PodSandboxId:c3155d201b16bb5ce1acf1d838da70ee810696e831b9b47c277302c0c6a715a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1605311291076398725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d14afba-9944-433e-818d-c5969fd23efc,},Annotations:map[string]string{io.kubernetes.container.hash: fc4b11dc,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402919d5381465bf52d673eb2473c96187c928ccc4675e27efce14fa0e6e0135,PodSandboxId:59f33c7f0806cc925dca35d1c76c53bdac01f2abf5399528ae892c0063b1fb9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,},ImageRef:k8s.gcr.io/coredns@sha256:621c94aaeedd98c1ca3eb724dc0a430b43eab58c3199832dc8eafd423150018a,State:CONTAINER_RUNNING,CreatedAt:1605311286728482060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d4dd4b4db-9nvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7678ab1d-33c2-4613-ab05-593cc3a77698,},Annotations:map[string]string{io.kubernetes.container.hash: 48c9a032,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13de2f15185127ee7c466dfc7982f19d4eb72366cb889c8b55b95da0f48e769,PodSandboxId:0bf87146b75ec750196aceea739c2cbbdab2d2e1241d1e3940c72687039b4b33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,},ImageRef:k8s.gcr.io/etcd@sha256:2f37d055a1d6be8d75c56896a6ecfd5c597505141a8d3ad8e6a9838657a4c52e,State:CONTAINER_RUNNING,CreatedAt:1605311272796769093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f93bc1ca1318454e4f387d079d3ed3,},Annotations:map[string]string{io.kub
ernetes.container.hash: 5e1c6040,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdae31c4913657c8ebfeb5ff077d9e1790fd9dc13032dc55d51508d679b9bdc4,PodSandboxId:2b89e7772256a9abb3de32f7f3078e54f5eea95620b36b794aaee76ce50e5d09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78b4180ab00d0fb99b1be2b5ef92a4831ad07f00f27e6746828f374497d79367,},ImageRef:k8s.gcr.io/kube-scheduler@sha256:482ca815d16c723cc4e2a6d37e6d0aed9706dbdf4241b9e32b1a19aea9c99ce0,State:CONTAINER_RUNNING,CreatedAt:1605311270129780451,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d56135b6f61d5db3f635e70693e7224d,},Annotations:map[string]string{io.kubernetes.container.hash: f5e21dc5,io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8f21e69e2c5c7995293d40698100bb2bf70c9384eca44295c32fbe60e414e8,PodSandboxId:00d34b55df0ae6348dd07e33d161f77d521ce2cc34d6b3d8322b0110f704c92b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c500a024ff843278184e5454ff6ee040a106c867c5a0361886fd3057cace2264,},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7424b172281ffbb3937b1fb53709a63190fb9badc29993b350c642cbd8f53a50,State:CONTAINER_RUNNING,CreatedAt:1605311269958130243,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b29d223d9edf898051121d1f2e3d54,},Annotations:map[string]string{io.kubernetes.container.hash: b4549cfc,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65b395688c7844df79cc9b218e069111a0ef5b5e5dd0fef2bb5435b4eb8564c,PodSandboxId:244887518e383f9f69ea00ec174f8907d5447722838ac8f3f39137302e075bcc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d2f090f2479fbf92c508100e0a6106b3516bb70421a465586661feb1494145a2,},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:edb80c790bea171018d7f26ec7dd5a4f15da007811dbb17cf29b1ce8fdb96b91,State:CONTAINER_RUNNING,CreatedAt:1605311269818807034,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61122dbf61c0657d93f147ac231888c,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6049d3,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ca5fa58f-2a5a-4059-9b68-07c721c6741d name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.191471769Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=74b771fc-bead-470d-9c48-ea66b10fc81e name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.192064833Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:18243ba7ebb0e5993683e8679909d7ad366b46a9e90b0e953abfd5cdb1cd6c73,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-c8b69c96c-tr9cr,Uid:e64cb49d-a4bd-46c7-b3db-edec824639fe,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311297598853470,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-c8b69c96c-tr9cr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e64cb49d-a4bd-46c7-b3db-edec824639fe,k8s-app: dashboard-metrics-scraper,pod-template-hash: c8b69c96c,},Annotations:map[string]string{kubernetes.io/config.seen: 2020-11-13T23:48:16.318254675Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:bd016b3a26069df4139bcd83638273850a7240dc518692
2d3c6942077fc1041b,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-5ddb79bb9f-ndvzw,Uid:fb40b588-4533-4fa9-a315-47b33b420ed6,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311297485761485,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-5ddb79bb9f-ndvzw,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fb40b588-4533-4fa9-a315-47b33b420ed6,k8s-app: kubernetes-dashboard,pod-template-hash: 5ddb79bb9f,},Annotations:map[string]string{kubernetes.io/config.seen: 2020-11-13T23:48:16.206084951Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:582067d0e761e711e466bc37f0a1396c1f934cf1be6e638f4152e5f8988ab2a3,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d54d7102-688e-4eb4-aa53-2a59b7110ab4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311295961287557,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration
-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d54d7102-688e-4eb4-aa53-2a59b7110ab4,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v3\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2020-11-13T23:48:00.635445403Z,kubernetes.io/co
nfig.source: api,},RuntimeHandler:,},&PodSandbox{Id:771b1ff5051d5066abebb97620752a11a9d5628ffd7529cfb581317297f843e7,Metadata:&PodSandboxMetadata{Name:kube-proxy-7pmsg,Uid:644f9ac0-e218-41b1-aadd-518dce1505e1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311294933927148,Labels:map[string]string{controller-revision-hash: 65fbbbc6cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7pmsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f9ac0-e218-41b1-aadd-518dce1505e1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2020-11-13T23:48:00.635435839Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c3155d201b16bb5ce1acf1d838da70ee810696e831b9b47c277302c0c6a715a1,Metadata:&PodSandboxMetadata{Name:busybox,Uid:5d14afba-9944-433e-818d-c5969fd23efc,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311285269261714,Labels:map[string]string{integration-test: busybox,io.kubern
etes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d14afba-9944-433e-818d-c5969fd23efc,},Annotations:map[string]string{kubernetes.io/config.seen: 2020-11-13T23:48:00.635402634Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:59f33c7f0806cc925dca35d1c76c53bdac01f2abf5399528ae892c0063b1fb9e,Metadata:&PodSandboxMetadata{Name:coredns-5d4dd4b4db-9nvsc,Uid:7678ab1d-33c2-4613-ab05-593cc3a77698,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311285184954074,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d4dd4b4db-9nvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7678ab1d-33c2-4613-ab05-593cc3a77698,k8s-app: kube-dns,pod-template-hash: 5d4dd4b4db,},Annotations:map[string]string{kubernetes.io/config.seen: 2020-11-13T23:48:00.63541483Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0bf87146b75ec750196aceea739c2cbbdab2d2e1241d1e394
0c72687039b4b33,Metadata:&PodSandboxMetadata{Name:etcd-crio-20201113234030-7409,Uid:a7f93bc1ca1318454e4f387d079d3ed3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311285129937171,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f93bc1ca1318454e4f387d079d3ed3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a7f93bc1ca1318454e4f387d079d3ed3,kubernetes.io/config.seen: 2020-11-13T23:47:45.887364563Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:244887518e383f9f69ea00ec174f8907d5447722838ac8f3f39137302e075bcc,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-crio-20201113234030-7409,Uid:b61122dbf61c0657d93f147ac231888c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311285073583437,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: kube-controller-manager-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61122dbf61c0657d93f147ac231888c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b61122dbf61c0657d93f147ac231888c,kubernetes.io/config.seen: 2020-11-13T23:47:45.887391834Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2b89e7772256a9abb3de32f7f3078e54f5eea95620b36b794aaee76ce50e5d09,Metadata:&PodSandboxMetadata{Name:kube-scheduler-crio-20201113234030-7409,Uid:d56135b6f61d5db3f635e70693e7224d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311285006517850,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d56135b6f61d5db3f635e70693e7224d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d56135b6f61d5db3f635e7069
3e7224d,kubernetes.io/config.seen: 2020-11-13T23:47:45.887395526Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:00d34b55df0ae6348dd07e33d161f77d521ce2cc34d6b3d8322b0110f704c92b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-crio-20201113234030-7409,Uid:15b29d223d9edf898051121d1f2e3d54,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311284945190936,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b29d223d9edf898051121d1f2e3d54,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 15b29d223d9edf898051121d1f2e3d54,kubernetes.io/config.seen: 2020-11-13T23:47:45.887386472Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=74b771fc-bead-470d-9c48-ea66b10fc81e name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.193517420Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=11194920-0bb0-47a4-b4b5-f2949aca398e name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.193575514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:59" id=11194920-0bb0-47a4-b4b5-f2949aca398e name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.194503385Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46b11ae2e59f58c5b764a3dfa5d1a376eed7038118e3d501f3918f28be76d8bb,PodSandboxId:18243ba7ebb0e5993683e8679909d7ad366b46a9e90b0e953abfd5cdb1cd6c73,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4,},ImageRef:docker.io/kubernetesui/metrics-scraper@sha256:f0350dbe60f3787b16c4f5f484bf78937df4a8391f9eb99af122e49e2155b097,State:CONTAINER_RUNNING,CreatedAt:1605311298944045900,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c8b69c96c-tr9cr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e64cb49d-a4bd-46c7-b3db-edec824639fe,},Annotations:map[string]string{io.kubernetes.container.hash: 558135cc,io.kubernetes.container.por
ts: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c452ced717e6c28fef3e562378b81031fde1f99151d134eb401858067ef99008,PodSandboxId:bd016b3a26069df4139bcd83638273850a7240dc5186922d3c6942077fc1041b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:503bc4b7440b9039d0a18858bba30906e25e2f690094b2b4f8735dfc3609cda2,},ImageRef:docker.io/kubernetesui/dashboard@sha256:0f9243b2dbcc9d631cd5cbdc950b6c4ae3ff5634a91f768d4a4b27f199626631,State:CONTAINER_RUNNING,CreatedAt:1605311298273524547,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-5ddb79bb9f-ndvzw,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fb40b588-4533-4fa9-a315-47b33b420ed6,},Annotations:map[string]strin
g{io.kubernetes.container.hash: e27068b3,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855d24364ab052e95bdfa23928361aff4e4341cdc3cf32478ac98a3a7a834ce2,PodSandboxId:582067d0e761e711e466bc37f0a1396c1f934cf1be6e638f4152e5f8988ab2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:bad58561c4be797bbea256940eea1d9967b7581d1103f9e4f03f32936c1ae289,},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:5f02deeb1870b24a5d26141e310a429511a5668c8488afef8228fb3faef27ca8,State:CONTAINER_RUNNING,CreatedAt:1605311297276443998,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d54d7102-688e-4eb4-aa
53-2a59b7110ab4,},Annotations:map[string]string{io.kubernetes.container.hash: f1ad06fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35617cb7d28c41610467d3341ebd7c80f72c5c9d994138c0b6fd50a76fe6cf7,PodSandboxId:771b1ff5051d5066abebb97620752a11a9d5628ffd7529cfb581317297f843e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ae3d9889423ede337df3814baa77445e566597a5a882f3cdf933b4d9e0025f0f,},ImageRef:k8s.gcr.io/kube-proxy@sha256:f2f8c6a7bf3ea6ddcdeb39c4b79ac3f7d900ede7600df924bb525478ecbbc534,State:CONTAINER_RUNNING,CreatedAt:1605311295758016287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pmsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f9ac0-e218-41b1-aadd-518dce1505e1,},Annotations:map[string]string{io.kubernetes.container.
hash: dcc93b45,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb552abd8bd7bd334b49afeb31ff82914f12d1f436062f09b1e4a97f87d2689,PodSandboxId:c3155d201b16bb5ce1acf1d838da70ee810696e831b9b47c277302c0c6a715a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1605311291076398725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d14afba-9944-433e-818d-c5969fd23efc,},Annotations:map[string]string{io.kubernetes.container.hash: fc4b11dc,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402919d5381465bf52d673eb2473c96187c928ccc4675e27efce14fa0e6e0135,PodSandboxId:59f33c7f0806cc925dca35d1c76c53bdac01f2abf5399528ae892c0063b1fb9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,},ImageRef:k8s.gcr.io/coredns@sha256:621c94aaeedd98c1ca3eb724dc0a430b43eab58c3199832dc8eafd423150018a,State:CONTAINER_RUNNING,CreatedAt:1605311286728482060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d4dd4b4db-9nvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7678ab1d-33c2-4613-ab05-593cc3a77698,},Annotations:map[string]string{io.kubernetes.container.hash: 48c9a032,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13de2f15185127ee7c466dfc7982f19d4eb72366cb889c8b55b95da0f48e769,PodSandboxId:0bf87146b75ec750196aceea739c2cbbdab2d2e1241d1e3940c72687039b4b33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,},ImageRef:k8s.gcr.io/etcd@sha256:2f37d055a1d6be8d75c56896a6ecfd5c597505141a8d3ad8e6a9838657a4c52e,State:CONTAINER_RUNNING,CreatedAt:1605311272796769093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f93bc1ca1318454e4f387d079d3ed3,},Annotations:map[string]string{io.kub
ernetes.container.hash: 5e1c6040,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdae31c4913657c8ebfeb5ff077d9e1790fd9dc13032dc55d51508d679b9bdc4,PodSandboxId:2b89e7772256a9abb3de32f7f3078e54f5eea95620b36b794aaee76ce50e5d09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78b4180ab00d0fb99b1be2b5ef92a4831ad07f00f27e6746828f374497d79367,},ImageRef:k8s.gcr.io/kube-scheduler@sha256:482ca815d16c723cc4e2a6d37e6d0aed9706dbdf4241b9e32b1a19aea9c99ce0,State:CONTAINER_RUNNING,CreatedAt:1605311270129780451,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d56135b6f61d5db3f635e70693e7224d,},Annotations:map[string]string{io.kubernetes.container.hash: f5e21dc5,io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8f21e69e2c5c7995293d40698100bb2bf70c9384eca44295c32fbe60e414e8,PodSandboxId:00d34b55df0ae6348dd07e33d161f77d521ce2cc34d6b3d8322b0110f704c92b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c500a024ff843278184e5454ff6ee040a106c867c5a0361886fd3057cace2264,},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7424b172281ffbb3937b1fb53709a63190fb9badc29993b350c642cbd8f53a50,State:CONTAINER_RUNNING,CreatedAt:1605311269958130243,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b29d223d9edf898051121d1f2e3d54,},Annotations:map[string]string{io.kubernetes.container.hash: b4549cfc,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65b395688c7844df79cc9b218e069111a0ef5b5e5dd0fef2bb5435b4eb8564c,PodSandboxId:244887518e383f9f69ea00ec174f8907d5447722838ac8f3f39137302e075bcc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d2f090f2479fbf92c508100e0a6106b3516bb70421a465586661feb1494145a2,},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:edb80c790bea171018d7f26ec7dd5a4f15da007811dbb17cf29b1ce8fdb96b91,State:CONTAINER_RUNNING,CreatedAt:1605311269818807034,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61122dbf61c0657d93f147ac231888c,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6049d3,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=11194920-0bb0-47a4-b4b5-f2949aca398e name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.226193932Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f21c6a91-8de0-4335-ac96-6915863ba3cf name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.226372879Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:59" id=f21c6a91-8de0-4335-ac96-6915863ba3cf name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:41 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:41.226746860Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46b11ae2e59f58c5b764a3dfa5d1a376eed7038118e3d501f3918f28be76d8bb,PodSandboxId:18243ba7ebb0e5993683e8679909d7ad366b46a9e90b0e953abfd5cdb1cd6c73,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4,},ImageRef:docker.io/kubernetesui/metrics-scraper@sha256:f0350dbe60f3787b16c4f5f484bf78937df4a8391f9eb99af122e49e2155b097,State:CONTAINER_RUNNING,CreatedAt:1605311298944045900,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c8b69c96c-tr9cr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e64cb49d-a4bd-46c7-b3db-edec824639fe,},Annotations:map[string]string{io.kubernetes.container.hash: 558135cc,io.kubernetes.container.por
ts: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c452ced717e6c28fef3e562378b81031fde1f99151d134eb401858067ef99008,PodSandboxId:bd016b3a26069df4139bcd83638273850a7240dc5186922d3c6942077fc1041b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:503bc4b7440b9039d0a18858bba30906e25e2f690094b2b4f8735dfc3609cda2,},ImageRef:docker.io/kubernetesui/dashboard@sha256:0f9243b2dbcc9d631cd5cbdc950b6c4ae3ff5634a91f768d4a4b27f199626631,State:CONTAINER_RUNNING,CreatedAt:1605311298273524547,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-5ddb79bb9f-ndvzw,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fb40b588-4533-4fa9-a315-47b33b420ed6,},Annotations:map[string]strin
g{io.kubernetes.container.hash: e27068b3,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855d24364ab052e95bdfa23928361aff4e4341cdc3cf32478ac98a3a7a834ce2,PodSandboxId:582067d0e761e711e466bc37f0a1396c1f934cf1be6e638f4152e5f8988ab2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:bad58561c4be797bbea256940eea1d9967b7581d1103f9e4f03f32936c1ae289,},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:5f02deeb1870b24a5d26141e310a429511a5668c8488afef8228fb3faef27ca8,State:CONTAINER_RUNNING,CreatedAt:1605311297276443998,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d54d7102-688e-4eb4-aa
53-2a59b7110ab4,},Annotations:map[string]string{io.kubernetes.container.hash: f1ad06fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35617cb7d28c41610467d3341ebd7c80f72c5c9d994138c0b6fd50a76fe6cf7,PodSandboxId:771b1ff5051d5066abebb97620752a11a9d5628ffd7529cfb581317297f843e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ae3d9889423ede337df3814baa77445e566597a5a882f3cdf933b4d9e0025f0f,},ImageRef:k8s.gcr.io/kube-proxy@sha256:f2f8c6a7bf3ea6ddcdeb39c4b79ac3f7d900ede7600df924bb525478ecbbc534,State:CONTAINER_RUNNING,CreatedAt:1605311295758016287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pmsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f9ac0-e218-41b1-aadd-518dce1505e1,},Annotations:map[string]string{io.kubernetes.container.
hash: dcc93b45,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb552abd8bd7bd334b49afeb31ff82914f12d1f436062f09b1e4a97f87d2689,PodSandboxId:c3155d201b16bb5ce1acf1d838da70ee810696e831b9b47c277302c0c6a715a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1605311291076398725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d14afba-9944-433e-818d-c5969fd23efc,},Annotations:map[string]string{io.kubernetes.container.hash: fc4b11dc,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402919d5381465bf52d673eb2473c96187c928ccc4675e27efce14fa0e6e0135,PodSandboxId:59f33c7f0806cc925dca35d1c76c53bdac01f2abf5399528ae892c0063b1fb9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,},ImageRef:k8s.gcr.io/coredns@sha256:621c94aaeedd98c1ca3eb724dc0a430b43eab58c3199832dc8eafd423150018a,State:CONTAINER_RUNNING,CreatedAt:1605311286728482060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d4dd4b4db-9nvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7678ab1d-33c2-4613-ab05-593cc3a77698,},Annotations:map[string]string{io.kubernetes.container.hash: 48c9a032,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13de2f15185127ee7c466dfc7982f19d4eb72366cb889c8b55b95da0f48e769,PodSandboxId:0bf87146b75ec750196aceea739c2cbbdab2d2e1241d1e3940c72687039b4b33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,},ImageRef:k8s.gcr.io/etcd@sha256:2f37d055a1d6be8d75c56896a6ecfd5c597505141a8d3ad8e6a9838657a4c52e,State:CONTAINER_RUNNING,CreatedAt:1605311272796769093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f93bc1ca1318454e4f387d079d3ed3,},Annotations:map[string]string{io.kub
ernetes.container.hash: 5e1c6040,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdae31c4913657c8ebfeb5ff077d9e1790fd9dc13032dc55d51508d679b9bdc4,PodSandboxId:2b89e7772256a9abb3de32f7f3078e54f5eea95620b36b794aaee76ce50e5d09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78b4180ab00d0fb99b1be2b5ef92a4831ad07f00f27e6746828f374497d79367,},ImageRef:k8s.gcr.io/kube-scheduler@sha256:482ca815d16c723cc4e2a6d37e6d0aed9706dbdf4241b9e32b1a19aea9c99ce0,State:CONTAINER_RUNNING,CreatedAt:1605311270129780451,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d56135b6f61d5db3f635e70693e7224d,},Annotations:map[string]string{io.kubernetes.container.hash: f5e21dc5,io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8f21e69e2c5c7995293d40698100bb2bf70c9384eca44295c32fbe60e414e8,PodSandboxId:00d34b55df0ae6348dd07e33d161f77d521ce2cc34d6b3d8322b0110f704c92b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c500a024ff843278184e5454ff6ee040a106c867c5a0361886fd3057cace2264,},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7424b172281ffbb3937b1fb53709a63190fb9badc29993b350c642cbd8f53a50,State:CONTAINER_RUNNING,CreatedAt:1605311269958130243,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b29d223d9edf898051121d1f2e3d54,},Annotations:map[string]string{io.kubernetes.container.hash: b4549cfc,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65b395688c7844df79cc9b218e069111a0ef5b5e5dd0fef2bb5435b4eb8564c,PodSandboxId:244887518e383f9f69ea00ec174f8907d5447722838ac8f3f39137302e075bcc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d2f090f2479fbf92c508100e0a6106b3516bb70421a465586661feb1494145a2,},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:edb80c790bea171018d7f26ec7dd5a4f15da007811dbb17cf29b1ce8fdb96b91,State:CONTAINER_RUNNING,CreatedAt:1605311269818807034,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61122dbf61c0657d93f147ac231888c,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6049d3,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f21c6a91-8de0-4335-ac96-6915863ba3cf name=/runtime.v1alpha2.RuntimeService/ListContainers
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                               CREATED              STATE               NAME                        ATTEMPT             POD ID
	* 46b11ae2e59f5       86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4                                    About a minute ago   Running             dashboard-metrics-scraper   0                   18243ba7ebb0e
	* c452ced717e6c       503bc4b7440b9039d0a18858bba30906e25e2f690094b2b4f8735dfc3609cda2                                    About a minute ago   Running             kubernetes-dashboard        0                   bd016b3a26069
	* 855d24364ab05       bad58561c4be797bbea256940eea1d9967b7581d1103f9e4f03f32936c1ae289                                    About a minute ago   Running             storage-provisioner         0                   582067d0e761e
	* c35617cb7d28c       ae3d9889423ede337df3814baa77445e566597a5a882f3cdf933b4d9e0025f0f                                    About a minute ago   Running             kube-proxy                  0                   771b1ff5051d5
	* 7eb552abd8bd7       docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   About a minute ago   Running             busybox                     0                   c3155d201b16b
	* 402919d538146       eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c                                    About a minute ago   Running             coredns                     0                   59f33c7f0806c
	* d13de2f151851       2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d                                    About a minute ago   Running             etcd                        0                   0bf87146b75ec
	* fdae31c491365       78b4180ab00d0fb99b1be2b5ef92a4831ad07f00f27e6746828f374497d79367                                    About a minute ago   Running             kube-scheduler              0                   2b89e7772256a
	* 5b8f21e69e2c5       c500a024ff843278184e5454ff6ee040a106c867c5a0361886fd3057cace2264                                    About a minute ago   Running             kube-apiserver              0                   00d34b55df0ae
	* d65b395688c78       d2f090f2479fbf92c508100e0a6106b3516bb70421a465586661feb1494145a2                                    About a minute ago   Running             kube-controller-manager     0                   244887518e383
	* 
	* ==> coredns [402919d5381465bf52d673eb2473c96187c928ccc4675e27efce14fa0e6e0135] <==
	* .:53
	* 2020-11-13T23:44:46.733Z [INFO] CoreDNS-1.3.1
	* 2020-11-13T23:44:46.733Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	* CoreDNS-1.3.1
	* linux/amd64, go1.11.4, 6b56a9c
	* 2020-11-13T23:44:46.733Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
	* E1113 23:45:11.741845       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* E1113 23:45:11.747587       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* E1113 23:45:11.752962       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* .:53
	* 2020-11-13T23:48:11.933Z [INFO] CoreDNS-1.3.1
	* 2020-11-13T23:48:11.933Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	* CoreDNS-1.3.1
	* linux/amd64, go1.11.4, 6b56a9c
	* 2020-11-13T23:48:11.933Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
	* E1113 23:48:36.929755       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* E1113 23:48:36.930037       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* E1113 23:48:36.930119       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* 
	* ==> describe nodes <==
	* Name:               crio-20201113234030-7409
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=crio-20201113234030-7409
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=f1624ef53a2521d2c375e24d59fe2d2c53b4ded0
	*                     minikube.k8s.io/name=crio-20201113234030-7409
	*                     minikube.k8s.io/updated_at=2020_11_13T23_44_24_0700
	*                     minikube.k8s.io/version=v1.15.0
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Fri, 13 Nov 2020 23:44:17 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Fri, 13 Nov 2020 23:49:21 +0000   Fri, 13 Nov 2020 23:44:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Fri, 13 Nov 2020 23:49:21 +0000   Fri, 13 Nov 2020 23:44:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Fri, 13 Nov 2020 23:49:21 +0000   Fri, 13 Nov 2020 23:44:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Fri, 13 Nov 2020 23:49:21 +0000   Fri, 13 Nov 2020 23:44:08 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.39.90
	*   Hostname:    crio-20201113234030-7409
	* Capacity:
	*  cpu:                2
	*  ephemeral-storage:  16954224Ki
	*  hugepages-2Mi:      0
	*  memory:             2083920Ki
	*  pods:               110
	* Allocatable:
	*  cpu:                2
	*  ephemeral-storage:  16954224Ki
	*  hugepages-2Mi:      0
	*  memory:             2083920Ki
	*  pods:               110
	* System Info:
	*  Machine ID:                 f7c0699993ae41a787da051015c1dedd
	*  System UUID:                f7c06999-93ae-41a7-87da-051015c1dedd
	*  Boot ID:                    717f6206-1b5a-4ca0-b7f6-048357351ba2
	*  Kernel Version:             4.19.150
	*  OS Image:                   Buildroot 2020.02.7
	*  Operating System:           linux
	*  Architecture:               amd64
	*  Container Runtime Version:  cri-o://1.18.3
	*  Kubelet Version:            v1.15.7
	*  Kube-Proxy Version:         v1.15.7
	* PodCIDR:                     10.244.0.0/24
	* Non-terminated Pods:         (10 in total)
	*   Namespace                  Name                                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                  ----                                                ------------  ----------  ---------------  -------------  ---
	*   default                    busybox                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	*   kube-system                coredns-5d4dd4b4db-9nvsc                            100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m5s
	*   kube-system                etcd-crio-20201113234030-7409                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	*   kube-system                kube-apiserver-crio-20201113234030-7409             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m24s
	*   kube-system                kube-controller-manager-crio-20201113234030-7409    200m (10%)    0 (0%)      0 (0%)           0 (0%)         101s
	*   kube-system                kube-proxy-7pmsg                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	*   kube-system                kube-scheduler-crio-20201113234030-7409             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m25s
	*   kube-system                storage-provisioner                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	*   kubernetes-dashboard       dashboard-metrics-scraper-c8b69c96c-tr9cr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	*   kubernetes-dashboard       kubernetes-dashboard-5ddb79bb9f-ndvzw               0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                650m (32%)  0 (0%)
	*   memory             70Mi (3%)   170Mi (8%)
	*   ephemeral-storage  0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age                    From                                  Message
	*   ----    ------                   ----                   ----                                  -------
	*   Normal  NodeHasSufficientMemory  5m39s (x8 over 5m40s)  kubelet, crio-20201113234030-7409     Node crio-20201113234030-7409 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    5m39s (x7 over 5m40s)  kubelet, crio-20201113234030-7409     Node crio-20201113234030-7409 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     5m39s (x8 over 5m40s)  kubelet, crio-20201113234030-7409     Node crio-20201113234030-7409 status is now: NodeHasSufficientPID
	*   Normal  Starting                 4m59s                  kube-proxy, crio-20201113234030-7409  Starting kube-proxy.
	*   Normal  Starting                 116s                   kubelet, crio-20201113234030-7409     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  115s (x8 over 115s)    kubelet, crio-20201113234030-7409     Node crio-20201113234030-7409 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    115s (x8 over 115s)    kubelet, crio-20201113234030-7409     Node crio-20201113234030-7409 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     115s (x7 over 115s)    kubelet, crio-20201113234030-7409     Node crio-20201113234030-7409 status is now: NodeHasSufficientPID
	*   Normal  NodeAllocatableEnforced  115s                   kubelet, crio-20201113234030-7409     Updated Node Allocatable limit across pods
	*   Normal  Starting                 85s                    kube-proxy, crio-20201113234030-7409  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [Nov13 23:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	* [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	* [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	* [  +0.156653] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	* [  +5.746668] Unstable clock detected, switching default tracing clock to "global"
	*               If you want to keep using the local clock, then add:
	*                 "trace_clock=local"
	*               on the kernel command line
	* [  +0.000051] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	* [  +5.433386] systemd-fstab-generator[1157]: Ignoring "noauto" for root device
	* [  +0.061982] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	* [  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	* [  +1.803370] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1714 comm=systemd-network
	* [  +1.042447] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	* [  +0.935130] vboxguest: loading out-of-tree module taints kernel.
	* [  +0.009214] vboxguest: PCI device not found, probably running on physical hardware.
	* [  +2.961942] systemd-fstab-generator[2050]: Ignoring "noauto" for root device
	* [Nov13 23:47] systemd-fstab-generator[3045]: Ignoring "noauto" for root device
	* [Nov13 23:48] systemd-fstab-generator[3494]: Ignoring "noauto" for root device
	* [  +1.258214] kauditd_printk_skb: 20 callbacks suppressed
	* [  +2.043611] tee (3884): /proc/3303/oom_adj is deprecated, please use /proc/3303/oom_score_adj instead.
	* [ +10.914197] kauditd_printk_skb: 20 callbacks suppressed
	* [ +16.316495] NFSD: Unable to end grace period: -110
	* [ +13.082057] kauditd_printk_skb: 71 callbacks suppressed
	* 
	* ==> etcd [d13de2f15185127ee7c466dfc7982f19d4eb72366cb889c8b55b95da0f48e769] <==
	* 2020-11-13 23:47:52.914771 I | raft: 8d381aaacda0b9bd became follower at term 2
	* 2020-11-13 23:47:52.914838 I | raft: newRaft 8d381aaacda0b9bd [peers: [], term: 2, commit: 498, applied: 0, lastindex: 498, lastterm: 2]
	* 2020-11-13 23:47:52.937184 W | auth: simple token is not cryptographically signed
	* 2020-11-13 23:47:52.945436 I | etcdserver: starting server... [version: 3.3.10, cluster version: to_be_decided]
	* 2020-11-13 23:47:52.949875 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	* 2020-11-13 23:47:52.951114 I | etcdserver/membership: added member 8d381aaacda0b9bd [https://192.168.39.90:2380] to cluster 8cf3a1558a63fa9e
	* 2020-11-13 23:47:52.951336 N | etcdserver/membership: set the initial cluster version to 3.3
	* 2020-11-13 23:47:52.951429 I | etcdserver/api: enabled capabilities for version 3.3
	* 2020-11-13 23:47:52.954717 I | embed: listening for metrics on http://192.168.39.90:2381
	* 2020-11-13 23:47:52.956886 I | embed: listening for metrics on http://127.0.0.1:2381
	* 2020-11-13 23:47:54.615842 I | raft: 8d381aaacda0b9bd is starting a new election at term 2
	* 2020-11-13 23:47:54.615902 I | raft: 8d381aaacda0b9bd became candidate at term 3
	* 2020-11-13 23:47:54.615947 I | raft: 8d381aaacda0b9bd received MsgVoteResp from 8d381aaacda0b9bd at term 3
	* 2020-11-13 23:47:54.615964 I | raft: 8d381aaacda0b9bd became leader at term 3
	* 2020-11-13 23:47:54.615978 I | raft: raft.node: 8d381aaacda0b9bd elected leader 8d381aaacda0b9bd at term 3
	* 2020-11-13 23:47:54.616478 I | etcdserver: published {Name:crio-20201113234030-7409 ClientURLs:[https://192.168.39.90:2379]} to cluster 8cf3a1558a63fa9e
	* 2020-11-13 23:47:54.617391 I | embed: ready to serve client requests
	* 2020-11-13 23:47:54.617898 I | embed: ready to serve client requests
	* 2020-11-13 23:47:54.620053 I | embed: serving client requests on 127.0.0.1:2379
	* 2020-11-13 23:47:54.620114 I | embed: serving client requests on 192.168.39.90:2379
	* proto: no coders for int
	* proto: no encoder for ValueSize int [GetProperties]
	* 2020-11-13 23:48:04.035036 W | etcdserver: request "header:<ID:13383983152324138109 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-crio-20201113234030-7409.164736181c8d0064\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-crio-20201113234030-7409.164736181c8d0064\" value_size:490 lease:4160611115469362065 >> failure:<>>" with result "size:16" took too long (211.056157ms) to execute
	* 2020-11-13 23:48:04.035324 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-crio-20201113234030-7409\" " with result "range_response_count:1 size:2169" took too long (182.371843ms) to execute
	* 2020-11-13 23:48:04.036068 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/deployment-controller\" " with result "range_response_count:1 size:260" took too long (150.713278ms) to execute
	* 
	* ==> kernel <==
	*  23:49:42 up 3 min,  0 users,  load average: 1.17, 0.88, 0.37
	* Linux crio-20201113234030-7409 4.19.150 #1 SMP Fri Nov 6 15:58:07 PST 2020 x86_64 GNU/Linux
	* PRETTY_NAME="Buildroot 2020.02.7"
	* 
	* ==> kube-apiserver [5b8f21e69e2c5c7995293d40698100bb2bf70c9384eca44295c32fbe60e414e8] <==
	* I1113 23:48:00.466541       1 controller.go:81] Starting OpenAPI AggregationController
	* I1113 23:48:00.481008       1 crdregistration_controller.go:112] Starting crd-autoregister controller
	* I1113 23:48:00.481290       1 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
	* I1113 23:48:00.481443       1 controller.go:83] Starting OpenAPI controller
	* I1113 23:48:00.481469       1 customresource_discovery_controller.go:208] Starting DiscoveryController
	* I1113 23:48:00.481579       1 naming_controller.go:288] Starting NamingConditionController
	* I1113 23:48:00.481675       1 establishing_controller.go:73] Starting EstablishingController
	* I1113 23:48:00.481705       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
	* I1113 23:48:00.711700       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	* I1113 23:48:00.753945       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	* E1113 23:48:00.763870       1 controller.go:148] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	* I1113 23:48:00.767829       1 cache.go:39] Caches are synced for autoregister controller
	* I1113 23:48:00.768707       1 cache.go:39] Caches are synced for AvailableConditionController controller
	* I1113 23:48:00.782379       1 controller_utils.go:1036] Caches are synced for crd-autoregister controller
	* I1113 23:48:01.461948       1 controller.go:107] OpenAPI AggregationController: Processing item 
	* I1113 23:48:01.462117       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	* I1113 23:48:01.462141       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	* I1113 23:48:01.521694       1 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
	* I1113 23:48:05.754875       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	* I1113 23:48:05.803393       1 controller.go:606] quota admission added evaluator for: deployments.apps
	* I1113 23:48:05.934325       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	* I1113 23:48:05.956757       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	* I1113 23:48:05.972011       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	* I1113 23:48:15.938177       1 controller.go:606] quota admission added evaluator for: endpoints
	* I1113 23:48:15.953919       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	* 
	* ==> kube-controller-manager [d65b395688c7844df79cc9b218e069111a0ef5b5e5dd0fef2bb5435b4eb8564c] <==
	* I1113 23:48:15.889421       1 controller_utils.go:1036] Caches are synced for job controller
	* I1113 23:48:15.889926       1 controller_utils.go:1036] Caches are synced for persistent volume controller
	* I1113 23:48:15.906493       1 controller_utils.go:1036] Caches are synced for resource quota controller
	* I1113 23:48:15.920300       1 controller_utils.go:1036] Caches are synced for ReplicationController controller
	* I1113 23:48:15.927975       1 controller_utils.go:1036] Caches are synced for endpoint controller
	* I1113 23:48:15.936570       1 controller_utils.go:1036] Caches are synced for certificate controller
	* I1113 23:48:15.936758       1 controller_utils.go:1036] Caches are synced for stateful set controller
	* I1113 23:48:15.950485       1 controller_utils.go:1036] Caches are synced for deployment controller
	* I1113 23:48:15.966548       1 controller_utils.go:1036] Caches are synced for ReplicaSet controller
	* I1113 23:48:15.975700       1 controller_utils.go:1036] Caches are synced for HPA controller
	* I1113 23:48:15.978474       1 controller_utils.go:1036] Caches are synced for disruption controller
	* I1113 23:48:15.978727       1 disruption.go:338] Sending events to api server.
	* I1113 23:48:15.980309       1 controller_utils.go:1036] Caches are synced for daemon sets controller
	* I1113 23:48:15.986404       1 controller_utils.go:1036] Caches are synced for GC controller
	* I1113 23:48:15.994485       1 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"5a8a82df-e730-4a28-b914-5a9a8cee3b22", APIVersion:"apps/v1", ResourceVersion:"542", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-5ddb79bb9f to 1
	* I1113 23:48:15.994740       1 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"7ab9e3df-6697-4500-b4c6-c73bd770cc9b", APIVersion:"apps/v1", ResourceVersion:"541", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-c8b69c96c to 1
	* I1113 23:48:16.024567       1 controller_utils.go:1036] Caches are synced for certificate controller
	* I1113 23:48:16.104046       1 controller_utils.go:1036] Caches are synced for attach detach controller
	* I1113 23:48:16.137162       1 controller_utils.go:1036] Caches are synced for garbage collector controller
	* I1113 23:48:16.160240       1 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5ddb79bb9f", UID:"a9a41991-d8b8-41d2-b875-a4a3b3ef1119", APIVersion:"apps/v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5ddb79bb9f-ndvzw
	* I1113 23:48:16.174252       1 controller_utils.go:1036] Caches are synced for garbage collector controller
	* I1113 23:48:16.174276       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I1113 23:48:16.201676       1 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
	* I1113 23:48:16.212139       1 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-c8b69c96c", UID:"17bb9d23-9ea0-45db-8bed-33e607ac22e5", APIVersion:"apps/v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-c8b69c96c-tr9cr
	* I1113 23:48:16.302483       1 controller_utils.go:1036] Caches are synced for resource quota controller
	* 
	* ==> kube-proxy [c35617cb7d28c41610467d3341ebd7c80f72c5c9d994138c0b6fd50a76fe6cf7] <==
	* I1113 23:44:42.089010       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
	* I1113 23:44:42.089479       1 conntrack.go:52] Setting nf_conntrack_max to 131072
	* I1113 23:44:42.090218       1 conntrack.go:83] Setting conntrack hashsize to 32768
	* I1113 23:44:42.094532       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1113 23:44:42.095353       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1113 23:44:42.095907       1 config.go:187] Starting service config controller
	* I1113 23:44:42.096148       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
	* I1113 23:44:42.096323       1 config.go:96] Starting endpoints config controller
	* I1113 23:44:42.096437       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
	* I1113 23:44:42.212138       1 controller_utils.go:1036] Caches are synced for endpoints config controller
	* I1113 23:44:42.212654       1 controller_utils.go:1036] Caches are synced for service config controller
	* W1113 23:48:16.766866       1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
	* I1113 23:48:16.799093       1 server_others.go:143] Using iptables Proxier.
	* I1113 23:48:16.802735       1 server.go:534] Version: v1.15.7
	* I1113 23:48:16.829550       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
	* I1113 23:48:16.830922       1 conntrack.go:52] Setting nf_conntrack_max to 131072
	* I1113 23:48:16.831892       1 conntrack.go:83] Setting conntrack hashsize to 32768
	* I1113 23:48:16.837234       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1113 23:48:16.837413       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1113 23:48:16.837743       1 config.go:187] Starting service config controller
	* I1113 23:48:16.837879       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
	* I1113 23:48:16.838264       1 config.go:96] Starting endpoints config controller
	* I1113 23:48:16.838317       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
	* I1113 23:48:16.939206       1 controller_utils.go:1036] Caches are synced for endpoints config controller
	* I1113 23:48:16.939320       1 controller_utils.go:1036] Caches are synced for service config controller
	* 
	* ==> kube-scheduler [fdae31c4913657c8ebfeb5ff077d9e1790fd9dc13032dc55d51508d679b9bdc4] <==
	* E1113 23:44:18.112180       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1113 23:44:18.113471       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1113 23:44:18.116039       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1113 23:44:18.116450       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1113 23:44:18.119542       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* I1113 23:47:52.832971       1 serving.go:319] Generated self-signed cert in-memory
	* W1113 23:47:53.416824       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
	* W1113 23:47:53.416947       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
	* W1113 23:47:53.416978       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
	* I1113 23:47:53.428947       1 server.go:142] Version: v1.15.7
	* I1113 23:47:53.429095       1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	* W1113 23:47:53.431157       1 authorization.go:47] Authorization is disabled
	* W1113 23:47:53.431248       1 authentication.go:55] Authentication is disabled
	* I1113 23:47:53.431296       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	* I1113 23:47:53.432572       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	* E1113 23:48:00.577976       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1113 23:48:00.658731       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1113 23:48:00.695981       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1113 23:48:00.696447       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1113 23:48:00.696584       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1113 23:48:00.696844       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1113 23:48:00.696998       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1113 23:48:00.697255       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1113 23:48:00.697564       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1113 23:48:00.701744       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2020-11-13 23:46:27 UTC, end at Fri 2020-11-13 23:49:42 UTC. --
	* Nov 13 23:48:04 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:04.298975    3053 kuberuntime_manager.go:709] Failed to get pod sandbox status: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"; Skipping pod "coredns-5d4dd4b4db-9nvsc_kube-system(7678ab1d-33c2-4613-ab05-593cc3a77698)"
	* Nov 13 23:48:04 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:04.299022    3053 pod_workers.go:190] Error syncing pod 7678ab1d-33c2-4613-ab05-593cc3a77698 ("coredns-5d4dd4b4db-9nvsc_kube-system(7678ab1d-33c2-4613-ab05-593cc3a77698)"), skipping: failed to SyncPod: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:04 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:04.330230    3053 remote_runtime.go:182] ListPodSandbox with filter nil from runtime service failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:04 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:04.330302    3053 kuberuntime_sandbox.go:210] ListPodSandbox failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:04 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:04.330317    3053 generic.go:205] GenericPLEG: Unable to retrieve pods: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:04 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:04.339587    3053 kuberuntime_manager.go:709] Failed to get pod sandbox status: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"; Skipping pod "busybox_default(5d14afba-9944-433e-818d-c5969fd23efc)"
	* Nov 13 23:48:04 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:04.339704    3053 pod_workers.go:190] Error syncing pod 5d14afba-9944-433e-818d-c5969fd23efc ("busybox_default(5d14afba-9944-433e-818d-c5969fd23efc)"), skipping: failed to SyncPod: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:05 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:05.331312    3053 remote_runtime.go:182] ListPodSandbox with filter nil from runtime service failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:05 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:05.331383    3053 kuberuntime_sandbox.go:210] ListPodSandbox failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:05 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:05.331401    3053 generic.go:205] GenericPLEG: Unable to retrieve pods: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:05 crio-20201113234030-7409 kubelet[3053]: I1113 23:48:05.625463    3053 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0009ae070, CONNECTING
	* Nov 13 23:48:05 crio-20201113234030-7409 kubelet[3053]: I1113 23:48:05.625490    3053 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0009ae070, READY
	* Nov 13 23:48:05 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:05.932757    3053 remote_runtime.go:182] ListPodSandbox with filter &PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},} from runtime service failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:05 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:05.932841    3053 kuberuntime_sandbox.go:210] ListPodSandbox failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:05 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:05.932856    3053 kubelet_pods.go:1027] Error listing containers: &status.statusError{Code:14, Message:"all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\"", Details:[]*any.Any(nil)}
	* Nov 13 23:48:05 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:05.932887    3053 kubelet.go:1977] Failed cleaning pods: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:06 crio-20201113234030-7409 kubelet[3053]: I1113 23:48:06.163589    3053 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0009da200, CONNECTING
	* Nov 13 23:48:06 crio-20201113234030-7409 kubelet[3053]: I1113 23:48:06.163802    3053 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0009da200, READY
	* Nov 13 23:48:16 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:16.249915    3053 reflector.go:125] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-j6zmt": Failed to list *v1.Secret: secrets "kubernetes-dashboard-token-j6zmt" is forbidden: User "system:node:crio-20201113234030-7409" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node "crio-20201113234030-7409" and this object
	* Nov 13 23:48:16 crio-20201113234030-7409 kubelet[3053]: I1113 23:48:16.255893    3053 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/fb40b588-4533-4fa9-a315-47b33b420ed6-tmp-volume") pod "kubernetes-dashboard-5ddb79bb9f-ndvzw" (UID: "fb40b588-4533-4fa9-a315-47b33b420ed6")
	* Nov 13 23:48:16 crio-20201113234030-7409 kubelet[3053]: I1113 23:48:16.256558    3053 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-j6zmt" (UniqueName: "kubernetes.io/secret/fb40b588-4533-4fa9-a315-47b33b420ed6-kubernetes-dashboard-token-j6zmt") pod "kubernetes-dashboard-5ddb79bb9f-ndvzw" (UID: "fb40b588-4533-4fa9-a315-47b33b420ed6")
	* Nov 13 23:48:16 crio-20201113234030-7409 kubelet[3053]: I1113 23:48:16.357989    3053 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-j6zmt" (UniqueName: "kubernetes.io/secret/e64cb49d-a4bd-46c7-b3db-edec824639fe-kubernetes-dashboard-token-j6zmt") pod "dashboard-metrics-scraper-c8b69c96c-tr9cr" (UID: "e64cb49d-a4bd-46c7-b3db-edec824639fe")
	* Nov 13 23:48:16 crio-20201113234030-7409 kubelet[3053]: I1113 23:48:16.358337    3053 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/e64cb49d-a4bd-46c7-b3db-edec824639fe-tmp-volume") pod "dashboard-metrics-scraper-c8b69c96c-tr9cr" (UID: "e64cb49d-a4bd-46c7-b3db-edec824639fe")
	* Nov 13 23:48:46 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:46.104574    3053 manager.go:1084] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7678ab1d_33c2_4613_ab05_593cc3a77698.slice/crio-59f33c7f0806cc925dca35d1c76c53bdac01f2abf5399528ae892c0063b1fb9e.scope: Error finding container 59f33c7f0806cc925dca35d1c76c53bdac01f2abf5399528ae892c0063b1fb9e: Status 404 returned error &{%!s(*http.body=&{0xc0008a0100 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x737ab0) %!s(func() error=0x737a40)}
	* Nov 13 23:48:46 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:46.107977    3053 manager.go:1084] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d14afba_9944_433e_818d_c5969fd23efc.slice/crio-c3155d201b16bb5ce1acf1d838da70ee810696e831b9b47c277302c0c6a715a1.scope: Error finding container c3155d201b16bb5ce1acf1d838da70ee810696e831b9b47c277302c0c6a715a1: Status 404 returned error &{%!s(*http.body=&{0xc000cedbc0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x737ab0) %!s(func() error=0x737a40)}
	* 
	* ==> kubernetes-dashboard [c452ced717e6c28fef3e562378b81031fde1f99151d134eb401858067ef99008] <==
	* 2020/11/13 23:48:18 Starting overwatch
	* 2020/11/13 23:48:18 Using namespace: kubernetes-dashboard
	* 2020/11/13 23:48:18 Using in-cluster config to connect to apiserver
	* 2020/11/13 23:48:18 Using secret token for csrf signing
	* 2020/11/13 23:48:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	* 2020/11/13 23:48:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	* 2020/11/13 23:48:18 Successful initial request to the apiserver, version: v1.15.7
	* 2020/11/13 23:48:18 Generating JWE encryption key
	* 2020/11/13 23:48:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	* 2020/11/13 23:48:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	* 2020/11/13 23:48:19 Initializing JWE encryption key from synchronized object
	* 2020/11/13 23:48:19 Creating in-cluster Sidecar client
	* 2020/11/13 23:48:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	* 2020/11/13 23:48:19 Serving insecurely on HTTP port: 9090
	* 2020/11/13 23:48:49 Successful request to sidecar
	* 
	* ==> storage-provisioner [855d24364ab052e95bdfa23928361aff4e4341cdc3cf32478ac98a3a7a834ce2] <==
	* I1113 23:44:44.303528       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I1113 23:44:44.326430       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I1113 23:44:44.327067       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"79f37f59-638d-43b0-b081-00b1e98426a1", APIVersion:"v1", ResourceVersion:"386", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crio-20201113234030-7409_65dbb526-704c-44b8-8b61-8819b921f252 became leader
	* I1113 23:44:44.328929       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_crio-20201113234030-7409_65dbb526-704c-44b8-8b61-8819b921f252!
	* I1113 23:44:44.437931       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_crio-20201113234030-7409_65dbb526-704c-44b8-8b61-8819b921f252!
	* I1113 23:48:17.551960       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I1113 23:48:34.978092       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I1113 23:48:34.979330       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_crio-20201113234030-7409_5db1e5cb-3c36-4154-b0d2-ac07ed7d6edc!
	* I1113 23:48:34.999094       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"79f37f59-638d-43b0-b081-00b1e98426a1", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crio-20201113234030-7409_5db1e5cb-3c36-4154-b0d2-ac07ed7d6edc became leader
	* I1113 23:48:35.081477       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_crio-20201113234030-7409_5db1e5cb-3c36-4154-b0d2-ac07ed7d6edc!

                                                
                                                
-- /stdout --
** stderr ** 
	E1113 23:49:42.028225    3063 out.go:286] unable to execute * 2020-11-13 23:48:04.035036 W | etcdserver: request "header:<ID:13383983152324138109 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-crio-20201113234030-7409.164736181c8d0064\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-crio-20201113234030-7409.164736181c8d0064\" value_size:490 lease:4160611115469362065 >> failure:<>>" with result "size:16" took too long (211.056157ms) to execute
	: html/template:* 2020-11-13 23:48:04.035036 W | etcdserver: request "header:<ID:13383983152324138109 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-crio-20201113234030-7409.164736181c8d0064\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-crio-20201113234030-7409.164736181c8d0064\" value_size:490 lease:4160611115469362065 >> failure:<>>" with result "size:16" took too long (211.056157ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.

                                                
                                                
** /stderr **
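	(Editor's note on the stderr above: minikube's out.go appears to push each log line through html/template before printing, and the quoted etcd txn fragment "<ID:... username:\"...\">" looks to the contextual escaper like an HTML tag with a quote where an attribute name is expected, so it reports the escape error and, as the message says, falls back to "returning raw string". A minimal, hypothetical Go sketch of that failure mode follows -- this is not minikube's actual out.go code; the template name and the input string are illustrative and only modelled on the etcd line above.)

	package main

	import (
		"html/template"
		"io"
		"os"
	)

	func main() {
		// Illustrative input modelled on the etcd request line: note the
		// "<ID:123 username:\"...\">" fragment, which the html/template
		// escaper scans as a tag whose attribute name contains a quote.
		src := `request "header:<ID:123 username:\"kube-apiserver-etcd-client\" auth_revision:1 >" took too long`

		t, err := template.New("log").Parse(src)
		if err != nil {
			panic(err) // parsing the text succeeds; contextual escaping runs later
		}

		// Escaping happens on first Execute and is expected to fail with an
		// error of the same class as in the stderr above, roughly:
		//   html/template:log: "\"" in attribute name: " username:\"kube-apiserver-etcd-"
		if err := t.Execute(io.Discard, nil); err != nil {
			os.Stderr.WriteString(err.Error() + "\n")
		}
	}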
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p crio-20201113234030-7409 -n crio-20201113234030-7409
helpers_test.go:255: (dbg) Run:  kubectl --context crio-20201113234030-7409 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: non-running pods: 
helpers_test.go:263: ======> post-mortem[TestStartStop/group/crio/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:266: (dbg) Run:  kubectl --context crio-20201113234030-7409 describe pod 
helpers_test.go:266: (dbg) Non-zero exit: kubectl --context crio-20201113234030-7409 describe pod : exit status 1 (82.947778ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:268: kubectl --context crio-20201113234030-7409 describe pod : exit status 1
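	(Editor's note on the exit status 1 above: the harness first lists the names of non-Running pods with a field selector, then passes them to "kubectl describe pod"; since the selector matched nothing, describe ran with no names and kubectl refused with "resource name may not be empty", so this is post-mortem noise rather than an additional failure. A hypothetical Go sketch of that flow follows -- it is not the actual helpers_test.go code; the function and variable names are invented, and it assumes kubectl is on PATH.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// describeNonRunningPods mirrors the two kubectl invocations shown in the
	// log: list non-Running pod names, then describe them. The empty-list guard
	// is what the raw harness output suggests is missing.
	func describeNonRunningPods(kubeContext string) error {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").Output()
		if err != nil {
			return fmt.Errorf("listing non-running pods: %w", err)
		}
		names := strings.Fields(string(out))
		if len(names) == 0 {
			fmt.Println("no non-running pods to describe")
			return nil
		}
		args := append([]string{"--context", kubeContext, "describe", "pod"}, names...)
		return exec.Command("kubectl", args...).Run()
	}

	func main() {
		if err := describeNonRunningPods("crio-20201113234030-7409"); err != nil {
			fmt.Println("describe failed:", err)
		}
	}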
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:233: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p crio-20201113234030-7409 -n crio-20201113234030-7409
helpers_test.go:238: <<< TestStartStop/group/crio/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/crio/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p crio-20201113234030-7409 logs -n 25
helpers_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p crio-20201113234030-7409 logs -n 25: (2.101972154s)
helpers_test.go:246: TestStartStop/group/crio/serial/VerifyKubernetesImages logs: 
-- stdout --
	* ==> CRI-O <==
	* -- Logs begin at Fri 2020-11-13 23:46:27 UTC, end at Fri 2020-11-13 23:49:44 UTC. --
	* Nov 13 23:49:43 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:43.932765061Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1e32300c-1809-4cc4-9693-82f7fb824e71 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	* Nov 13 23:49:43 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:43.933139266Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:18243ba7ebb0e5993683e8679909d7ad366b46a9e90b0e953abfd5cdb1cd6c73,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-c8b69c96c-tr9cr,Uid:e64cb49d-a4bd-46c7-b3db-edec824639fe,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311297598853470,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-c8b69c96c-tr9cr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e64cb49d-a4bd-46c7-b3db-edec824639fe,k8s-app: dashboard-metrics-scraper,pod-template-hash: c8b69c96c,},Annotations:map[string]string{kubernetes.io/config.seen: 2020-11-13T23:48:16.318254675Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:bd016b3a26069df4139bcd83638273850a7240dc518692
2d3c6942077fc1041b,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-5ddb79bb9f-ndvzw,Uid:fb40b588-4533-4fa9-a315-47b33b420ed6,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311297485761485,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-5ddb79bb9f-ndvzw,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fb40b588-4533-4fa9-a315-47b33b420ed6,k8s-app: kubernetes-dashboard,pod-template-hash: 5ddb79bb9f,},Annotations:map[string]string{kubernetes.io/config.seen: 2020-11-13T23:48:16.206084951Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:582067d0e761e711e466bc37f0a1396c1f934cf1be6e638f4152e5f8988ab2a3,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d54d7102-688e-4eb4-aa53-2a59b7110ab4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311295961287557,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration
-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d54d7102-688e-4eb4-aa53-2a59b7110ab4,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v3\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2020-11-13T23:48:00.635445403Z,kubernetes.io/co
nfig.source: api,},RuntimeHandler:,},&PodSandbox{Id:771b1ff5051d5066abebb97620752a11a9d5628ffd7529cfb581317297f843e7,Metadata:&PodSandboxMetadata{Name:kube-proxy-7pmsg,Uid:644f9ac0-e218-41b1-aadd-518dce1505e1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311294933927148,Labels:map[string]string{controller-revision-hash: 65fbbbc6cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7pmsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f9ac0-e218-41b1-aadd-518dce1505e1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2020-11-13T23:48:00.635435839Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c3155d201b16bb5ce1acf1d838da70ee810696e831b9b47c277302c0c6a715a1,Metadata:&PodSandboxMetadata{Name:busybox,Uid:5d14afba-9944-433e-818d-c5969fd23efc,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311285269261714,Labels:map[string]string{integration-test: busybox,io.kubern
etes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d14afba-9944-433e-818d-c5969fd23efc,},Annotations:map[string]string{kubernetes.io/config.seen: 2020-11-13T23:48:00.635402634Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:59f33c7f0806cc925dca35d1c76c53bdac01f2abf5399528ae892c0063b1fb9e,Metadata:&PodSandboxMetadata{Name:coredns-5d4dd4b4db-9nvsc,Uid:7678ab1d-33c2-4613-ab05-593cc3a77698,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311285184954074,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d4dd4b4db-9nvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7678ab1d-33c2-4613-ab05-593cc3a77698,k8s-app: kube-dns,pod-template-hash: 5d4dd4b4db,},Annotations:map[string]string{kubernetes.io/config.seen: 2020-11-13T23:48:00.63541483Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0bf87146b75ec750196aceea739c2cbbdab2d2e1241d1e394
0c72687039b4b33,Metadata:&PodSandboxMetadata{Name:etcd-crio-20201113234030-7409,Uid:a7f93bc1ca1318454e4f387d079d3ed3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311285129937171,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f93bc1ca1318454e4f387d079d3ed3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a7f93bc1ca1318454e4f387d079d3ed3,kubernetes.io/config.seen: 2020-11-13T23:47:45.887364563Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:244887518e383f9f69ea00ec174f8907d5447722838ac8f3f39137302e075bcc,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-crio-20201113234030-7409,Uid:b61122dbf61c0657d93f147ac231888c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311285073583437,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: kube-controller-manager-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61122dbf61c0657d93f147ac231888c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b61122dbf61c0657d93f147ac231888c,kubernetes.io/config.seen: 2020-11-13T23:47:45.887391834Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2b89e7772256a9abb3de32f7f3078e54f5eea95620b36b794aaee76ce50e5d09,Metadata:&PodSandboxMetadata{Name:kube-scheduler-crio-20201113234030-7409,Uid:d56135b6f61d5db3f635e70693e7224d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311285006517850,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d56135b6f61d5db3f635e70693e7224d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d56135b6f61d5db3f635e7069
3e7224d,kubernetes.io/config.seen: 2020-11-13T23:47:45.887395526Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:00d34b55df0ae6348dd07e33d161f77d521ce2cc34d6b3d8322b0110f704c92b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-crio-20201113234030-7409,Uid:15b29d223d9edf898051121d1f2e3d54,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311284945190936,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b29d223d9edf898051121d1f2e3d54,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 15b29d223d9edf898051121d1f2e3d54,kubernetes.io/config.seen: 2020-11-13T23:47:45.887386472Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=1e32300c-1809-4cc4-9693-82f7fb824e71 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	* Nov 13 23:49:43 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:43.939948082Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6de6e318-64d2-4b5c-b31d-40a155a391a1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:43 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:43.940408976Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:59" id=6de6e318-64d2-4b5c-b31d-40a155a391a1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:43 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:43.941250145Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46b11ae2e59f58c5b764a3dfa5d1a376eed7038118e3d501f3918f28be76d8bb,PodSandboxId:18243ba7ebb0e5993683e8679909d7ad366b46a9e90b0e953abfd5cdb1cd6c73,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4,},ImageRef:docker.io/kubernetesui/metrics-scraper@sha256:f0350dbe60f3787b16c4f5f484bf78937df4a8391f9eb99af122e49e2155b097,State:CONTAINER_RUNNING,CreatedAt:1605311298944045900,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c8b69c96c-tr9cr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e64cb49d-a4bd-46c7-b3db-edec824639fe,},Annotations:map[string]string{io.kubernetes.container.hash: 558135cc,io.kubernetes.container.por
ts: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c452ced717e6c28fef3e562378b81031fde1f99151d134eb401858067ef99008,PodSandboxId:bd016b3a26069df4139bcd83638273850a7240dc5186922d3c6942077fc1041b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:503bc4b7440b9039d0a18858bba30906e25e2f690094b2b4f8735dfc3609cda2,},ImageRef:docker.io/kubernetesui/dashboard@sha256:0f9243b2dbcc9d631cd5cbdc950b6c4ae3ff5634a91f768d4a4b27f199626631,State:CONTAINER_RUNNING,CreatedAt:1605311298273524547,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-5ddb79bb9f-ndvzw,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fb40b588-4533-4fa9-a315-47b33b420ed6,},Annotations:map[string]strin
g{io.kubernetes.container.hash: e27068b3,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855d24364ab052e95bdfa23928361aff4e4341cdc3cf32478ac98a3a7a834ce2,PodSandboxId:582067d0e761e711e466bc37f0a1396c1f934cf1be6e638f4152e5f8988ab2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:bad58561c4be797bbea256940eea1d9967b7581d1103f9e4f03f32936c1ae289,},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:5f02deeb1870b24a5d26141e310a429511a5668c8488afef8228fb3faef27ca8,State:CONTAINER_RUNNING,CreatedAt:1605311297276443998,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d54d7102-688e-4eb4-aa
53-2a59b7110ab4,},Annotations:map[string]string{io.kubernetes.container.hash: f1ad06fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35617cb7d28c41610467d3341ebd7c80f72c5c9d994138c0b6fd50a76fe6cf7,PodSandboxId:771b1ff5051d5066abebb97620752a11a9d5628ffd7529cfb581317297f843e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ae3d9889423ede337df3814baa77445e566597a5a882f3cdf933b4d9e0025f0f,},ImageRef:k8s.gcr.io/kube-proxy@sha256:f2f8c6a7bf3ea6ddcdeb39c4b79ac3f7d900ede7600df924bb525478ecbbc534,State:CONTAINER_RUNNING,CreatedAt:1605311295758016287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pmsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f9ac0-e218-41b1-aadd-518dce1505e1,},Annotations:map[string]string{io.kubernetes.container.
hash: dcc93b45,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb552abd8bd7bd334b49afeb31ff82914f12d1f436062f09b1e4a97f87d2689,PodSandboxId:c3155d201b16bb5ce1acf1d838da70ee810696e831b9b47c277302c0c6a715a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1605311291076398725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d14afba-9944-433e-818d-c5969fd23efc,},Annotations:map[string]string{io.kubernetes.container.hash: fc4b11dc,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402919d5381465bf52d673eb2473c96187c928ccc4675e27efce14fa0e6e0135,PodSandboxId:59f33c7f0806cc925dca35d1c76c53bdac01f2abf5399528ae892c0063b1fb9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,},ImageRef:k8s.gcr.io/coredns@sha256:621c94aaeedd98c1ca3eb724dc0a430b43eab58c3199832dc8eafd423150018a,State:CONTAINER_RUNNING,CreatedAt:1605311286728482060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d4dd4b4db-9nvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7678ab1d-33c2-4613-ab05-593cc3a77698,},Annotations:map[string]string{io.kubernetes.container.hash: 48c9a032,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13de2f15185127ee7c466dfc7982f19d4eb72366cb889c8b55b95da0f48e769,PodSandboxId:0bf87146b75ec750196aceea739c2cbbdab2d2e1241d1e3940c72687039b4b33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,},ImageRef:k8s.gcr.io/etcd@sha256:2f37d055a1d6be8d75c56896a6ecfd5c597505141a8d3ad8e6a9838657a4c52e,State:CONTAINER_RUNNING,CreatedAt:1605311272796769093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f93bc1ca1318454e4f387d079d3ed3,},Annotations:map[string]string{io.kub
ernetes.container.hash: 5e1c6040,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdae31c4913657c8ebfeb5ff077d9e1790fd9dc13032dc55d51508d679b9bdc4,PodSandboxId:2b89e7772256a9abb3de32f7f3078e54f5eea95620b36b794aaee76ce50e5d09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78b4180ab00d0fb99b1be2b5ef92a4831ad07f00f27e6746828f374497d79367,},ImageRef:k8s.gcr.io/kube-scheduler@sha256:482ca815d16c723cc4e2a6d37e6d0aed9706dbdf4241b9e32b1a19aea9c99ce0,State:CONTAINER_RUNNING,CreatedAt:1605311270129780451,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d56135b6f61d5db3f635e70693e7224d,},Annotations:map[string]string{io.kubernetes.container.hash: f5e21dc5,io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8f21e69e2c5c7995293d40698100bb2bf70c9384eca44295c32fbe60e414e8,PodSandboxId:00d34b55df0ae6348dd07e33d161f77d521ce2cc34d6b3d8322b0110f704c92b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c500a024ff843278184e5454ff6ee040a106c867c5a0361886fd3057cace2264,},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7424b172281ffbb3937b1fb53709a63190fb9badc29993b350c642cbd8f53a50,State:CONTAINER_RUNNING,CreatedAt:1605311269958130243,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b29d223d9edf898051121d1f2e3d54,},Annotations:map[string]string{io.kubernetes.container.hash: b4549cfc,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65b395688c7844df79cc9b218e069111a0ef5b5e5dd0fef2bb5435b4eb8564c,PodSandboxId:244887518e383f9f69ea00ec174f8907d5447722838ac8f3f39137302e075bcc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d2f090f2479fbf92c508100e0a6106b3516bb70421a465586661feb1494145a2,},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:edb80c790bea171018d7f26ec7dd5a4f15da007811dbb17cf29b1ce8fdb96b91,State:CONTAINER_RUNNING,CreatedAt:1605311269818807034,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61122dbf61c0657d93f147ac231888c,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6049d3,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6de6e318-64d2-4b5c-b31d-40a155a391a1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:43 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:43.947318473Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e4079419-44ec-454c-931e-1f0836cda361 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	* Nov 13 23:49:43 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:43.948283558Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0f586dd2-ddf5-4d98-8c7b-06165c3f1d7a name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:43 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:43.948969786Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:59" id=0f586dd2-ddf5-4d98-8c7b-06165c3f1d7a name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:43 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:43.948764987Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:18243ba7ebb0e5993683e8679909d7ad366b46a9e90b0e953abfd5cdb1cd6c73,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-c8b69c96c-tr9cr,Uid:e64cb49d-a4bd-46c7-b3db-edec824639fe,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311297598853470,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-c8b69c96c-tr9cr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e64cb49d-a4bd-46c7-b3db-edec824639fe,k8s-app: dashboard-metrics-scraper,pod-template-hash: c8b69c96c,},Annotations:map[string]string{kubernetes.io/config.seen: 2020-11-13T23:48:16.318254675Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:bd016b3a26069df4139bcd83638273850a7240dc518692
2d3c6942077fc1041b,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-5ddb79bb9f-ndvzw,Uid:fb40b588-4533-4fa9-a315-47b33b420ed6,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311297485761485,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-5ddb79bb9f-ndvzw,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fb40b588-4533-4fa9-a315-47b33b420ed6,k8s-app: kubernetes-dashboard,pod-template-hash: 5ddb79bb9f,},Annotations:map[string]string{kubernetes.io/config.seen: 2020-11-13T23:48:16.206084951Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:582067d0e761e711e466bc37f0a1396c1f934cf1be6e638f4152e5f8988ab2a3,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d54d7102-688e-4eb4-aa53-2a59b7110ab4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311295961287557,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration
-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d54d7102-688e-4eb4-aa53-2a59b7110ab4,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v3\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2020-11-13T23:48:00.635445403Z,kubernetes.io/co
nfig.source: api,},RuntimeHandler:,},&PodSandbox{Id:771b1ff5051d5066abebb97620752a11a9d5628ffd7529cfb581317297f843e7,Metadata:&PodSandboxMetadata{Name:kube-proxy-7pmsg,Uid:644f9ac0-e218-41b1-aadd-518dce1505e1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311294933927148,Labels:map[string]string{controller-revision-hash: 65fbbbc6cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7pmsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f9ac0-e218-41b1-aadd-518dce1505e1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2020-11-13T23:48:00.635435839Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c3155d201b16bb5ce1acf1d838da70ee810696e831b9b47c277302c0c6a715a1,Metadata:&PodSandboxMetadata{Name:busybox,Uid:5d14afba-9944-433e-818d-c5969fd23efc,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311285269261714,Labels:map[string]string{integration-test: busybox,io.kubern
etes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d14afba-9944-433e-818d-c5969fd23efc,},Annotations:map[string]string{kubernetes.io/config.seen: 2020-11-13T23:48:00.635402634Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:59f33c7f0806cc925dca35d1c76c53bdac01f2abf5399528ae892c0063b1fb9e,Metadata:&PodSandboxMetadata{Name:coredns-5d4dd4b4db-9nvsc,Uid:7678ab1d-33c2-4613-ab05-593cc3a77698,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311285184954074,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5d4dd4b4db-9nvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7678ab1d-33c2-4613-ab05-593cc3a77698,k8s-app: kube-dns,pod-template-hash: 5d4dd4b4db,},Annotations:map[string]string{kubernetes.io/config.seen: 2020-11-13T23:48:00.63541483Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0bf87146b75ec750196aceea739c2cbbdab2d2e1241d1e394
0c72687039b4b33,Metadata:&PodSandboxMetadata{Name:etcd-crio-20201113234030-7409,Uid:a7f93bc1ca1318454e4f387d079d3ed3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311285129937171,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f93bc1ca1318454e4f387d079d3ed3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a7f93bc1ca1318454e4f387d079d3ed3,kubernetes.io/config.seen: 2020-11-13T23:47:45.887364563Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:244887518e383f9f69ea00ec174f8907d5447722838ac8f3f39137302e075bcc,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-crio-20201113234030-7409,Uid:b61122dbf61c0657d93f147ac231888c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311285073583437,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: kube-controller-manager-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61122dbf61c0657d93f147ac231888c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b61122dbf61c0657d93f147ac231888c,kubernetes.io/config.seen: 2020-11-13T23:47:45.887391834Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2b89e7772256a9abb3de32f7f3078e54f5eea95620b36b794aaee76ce50e5d09,Metadata:&PodSandboxMetadata{Name:kube-scheduler-crio-20201113234030-7409,Uid:d56135b6f61d5db3f635e70693e7224d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311285006517850,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d56135b6f61d5db3f635e70693e7224d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d56135b6f61d5db3f635e7069
3e7224d,kubernetes.io/config.seen: 2020-11-13T23:47:45.887395526Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:00d34b55df0ae6348dd07e33d161f77d521ce2cc34d6b3d8322b0110f704c92b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-crio-20201113234030-7409,Uid:15b29d223d9edf898051121d1f2e3d54,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1605311284945190936,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b29d223d9edf898051121d1f2e3d54,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 15b29d223d9edf898051121d1f2e3d54,kubernetes.io/config.seen: 2020-11-13T23:47:45.887386472Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=e4079419-44ec-454c-931e-1f0836cda361 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	* Nov 13 23:49:43 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:43.950276969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46b11ae2e59f58c5b764a3dfa5d1a376eed7038118e3d501f3918f28be76d8bb,PodSandboxId:18243ba7ebb0e5993683e8679909d7ad366b46a9e90b0e953abfd5cdb1cd6c73,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4,},ImageRef:docker.io/kubernetesui/metrics-scraper@sha256:f0350dbe60f3787b16c4f5f484bf78937df4a8391f9eb99af122e49e2155b097,State:CONTAINER_RUNNING,CreatedAt:1605311298944045900,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c8b69c96c-tr9cr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e64cb49d-a4bd-46c7-b3db-edec824639fe,},Annotations:map[string]string{io.kubernetes.container.hash: 558135cc,io.kubernetes.container.por
ts: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c452ced717e6c28fef3e562378b81031fde1f99151d134eb401858067ef99008,PodSandboxId:bd016b3a26069df4139bcd83638273850a7240dc5186922d3c6942077fc1041b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:503bc4b7440b9039d0a18858bba30906e25e2f690094b2b4f8735dfc3609cda2,},ImageRef:docker.io/kubernetesui/dashboard@sha256:0f9243b2dbcc9d631cd5cbdc950b6c4ae3ff5634a91f768d4a4b27f199626631,State:CONTAINER_RUNNING,CreatedAt:1605311298273524547,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-5ddb79bb9f-ndvzw,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fb40b588-4533-4fa9-a315-47b33b420ed6,},Annotations:map[string]strin
g{io.kubernetes.container.hash: e27068b3,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855d24364ab052e95bdfa23928361aff4e4341cdc3cf32478ac98a3a7a834ce2,PodSandboxId:582067d0e761e711e466bc37f0a1396c1f934cf1be6e638f4152e5f8988ab2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:bad58561c4be797bbea256940eea1d9967b7581d1103f9e4f03f32936c1ae289,},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:5f02deeb1870b24a5d26141e310a429511a5668c8488afef8228fb3faef27ca8,State:CONTAINER_RUNNING,CreatedAt:1605311297276443998,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d54d7102-688e-4eb4-aa
53-2a59b7110ab4,},Annotations:map[string]string{io.kubernetes.container.hash: f1ad06fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35617cb7d28c41610467d3341ebd7c80f72c5c9d994138c0b6fd50a76fe6cf7,PodSandboxId:771b1ff5051d5066abebb97620752a11a9d5628ffd7529cfb581317297f843e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ae3d9889423ede337df3814baa77445e566597a5a882f3cdf933b4d9e0025f0f,},ImageRef:k8s.gcr.io/kube-proxy@sha256:f2f8c6a7bf3ea6ddcdeb39c4b79ac3f7d900ede7600df924bb525478ecbbc534,State:CONTAINER_RUNNING,CreatedAt:1605311295758016287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pmsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f9ac0-e218-41b1-aadd-518dce1505e1,},Annotations:map[string]string{io.kubernetes.container.
hash: dcc93b45,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb552abd8bd7bd334b49afeb31ff82914f12d1f436062f09b1e4a97f87d2689,PodSandboxId:c3155d201b16bb5ce1acf1d838da70ee810696e831b9b47c277302c0c6a715a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1605311291076398725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d14afba-9944-433e-818d-c5969fd23efc,},Annotations:map[string]string{io.kubernetes.container.hash: fc4b11dc,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402919d5381465bf52d673eb2473c96187c928ccc4675e27efce14fa0e6e0135,PodSandboxId:59f33c7f0806cc925dca35d1c76c53bdac01f2abf5399528ae892c0063b1fb9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,},ImageRef:k8s.gcr.io/coredns@sha256:621c94aaeedd98c1ca3eb724dc0a430b43eab58c3199832dc8eafd423150018a,State:CONTAINER_RUNNING,CreatedAt:1605311286728482060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d4dd4b4db-9nvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7678ab1d-33c2-4613-ab05-593cc3a77698,},Annotations:map[string]string{io.kubernetes.container.hash: 48c9a032,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13de2f15185127ee7c466dfc7982f19d4eb72366cb889c8b55b95da0f48e769,PodSandboxId:0bf87146b75ec750196aceea739c2cbbdab2d2e1241d1e3940c72687039b4b33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,},ImageRef:k8s.gcr.io/etcd@sha256:2f37d055a1d6be8d75c56896a6ecfd5c597505141a8d3ad8e6a9838657a4c52e,State:CONTAINER_RUNNING,CreatedAt:1605311272796769093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f93bc1ca1318454e4f387d079d3ed3,},Annotations:map[string]string{io.kub
ernetes.container.hash: 5e1c6040,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdae31c4913657c8ebfeb5ff077d9e1790fd9dc13032dc55d51508d679b9bdc4,PodSandboxId:2b89e7772256a9abb3de32f7f3078e54f5eea95620b36b794aaee76ce50e5d09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78b4180ab00d0fb99b1be2b5ef92a4831ad07f00f27e6746828f374497d79367,},ImageRef:k8s.gcr.io/kube-scheduler@sha256:482ca815d16c723cc4e2a6d37e6d0aed9706dbdf4241b9e32b1a19aea9c99ce0,State:CONTAINER_RUNNING,CreatedAt:1605311270129780451,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d56135b6f61d5db3f635e70693e7224d,},Annotations:map[string]string{io.kubernetes.container.hash: f5e21dc5,io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8f21e69e2c5c7995293d40698100bb2bf70c9384eca44295c32fbe60e414e8,PodSandboxId:00d34b55df0ae6348dd07e33d161f77d521ce2cc34d6b3d8322b0110f704c92b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c500a024ff843278184e5454ff6ee040a106c867c5a0361886fd3057cace2264,},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7424b172281ffbb3937b1fb53709a63190fb9badc29993b350c642cbd8f53a50,State:CONTAINER_RUNNING,CreatedAt:1605311269958130243,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b29d223d9edf898051121d1f2e3d54,},Annotations:map[string]string{io.kubernetes.container.hash: b4549cfc,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65b395688c7844df79cc9b218e069111a0ef5b5e5dd0fef2bb5435b4eb8564c,PodSandboxId:244887518e383f9f69ea00ec174f8907d5447722838ac8f3f39137302e075bcc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d2f090f2479fbf92c508100e0a6106b3516bb70421a465586661feb1494145a2,},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:edb80c790bea171018d7f26ec7dd5a4f15da007811dbb17cf29b1ce8fdb96b91,State:CONTAINER_RUNNING,CreatedAt:1605311269818807034,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61122dbf61c0657d93f147ac231888c,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6049d3,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0f586dd2-ddf5-4d98-8c7b-06165c3f1d7a name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:43 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:43.951567982Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=81352955-49b5-4ad2-ab6d-3ce5ef892027 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:43 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:43.952923204Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:59" id=81352955-49b5-4ad2-ab6d-3ce5ef892027 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:43 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:43.954113051Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46b11ae2e59f58c5b764a3dfa5d1a376eed7038118e3d501f3918f28be76d8bb,PodSandboxId:18243ba7ebb0e5993683e8679909d7ad366b46a9e90b0e953abfd5cdb1cd6c73,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4,},ImageRef:docker.io/kubernetesui/metrics-scraper@sha256:f0350dbe60f3787b16c4f5f484bf78937df4a8391f9eb99af122e49e2155b097,State:CONTAINER_RUNNING,CreatedAt:1605311298944045900,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c8b69c96c-tr9cr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e64cb49d-a4bd-46c7-b3db-edec824639fe,},Annotations:map[string]string{io.kubernetes.container.hash: 558135cc,io.kubernetes.container.por
ts: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c452ced717e6c28fef3e562378b81031fde1f99151d134eb401858067ef99008,PodSandboxId:bd016b3a26069df4139bcd83638273850a7240dc5186922d3c6942077fc1041b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:503bc4b7440b9039d0a18858bba30906e25e2f690094b2b4f8735dfc3609cda2,},ImageRef:docker.io/kubernetesui/dashboard@sha256:0f9243b2dbcc9d631cd5cbdc950b6c4ae3ff5634a91f768d4a4b27f199626631,State:CONTAINER_RUNNING,CreatedAt:1605311298273524547,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-5ddb79bb9f-ndvzw,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fb40b588-4533-4fa9-a315-47b33b420ed6,},Annotations:map[string]strin
g{io.kubernetes.container.hash: e27068b3,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855d24364ab052e95bdfa23928361aff4e4341cdc3cf32478ac98a3a7a834ce2,PodSandboxId:582067d0e761e711e466bc37f0a1396c1f934cf1be6e638f4152e5f8988ab2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:bad58561c4be797bbea256940eea1d9967b7581d1103f9e4f03f32936c1ae289,},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:5f02deeb1870b24a5d26141e310a429511a5668c8488afef8228fb3faef27ca8,State:CONTAINER_RUNNING,CreatedAt:1605311297276443998,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d54d7102-688e-4eb4-aa
53-2a59b7110ab4,},Annotations:map[string]string{io.kubernetes.container.hash: f1ad06fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35617cb7d28c41610467d3341ebd7c80f72c5c9d994138c0b6fd50a76fe6cf7,PodSandboxId:771b1ff5051d5066abebb97620752a11a9d5628ffd7529cfb581317297f843e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ae3d9889423ede337df3814baa77445e566597a5a882f3cdf933b4d9e0025f0f,},ImageRef:k8s.gcr.io/kube-proxy@sha256:f2f8c6a7bf3ea6ddcdeb39c4b79ac3f7d900ede7600df924bb525478ecbbc534,State:CONTAINER_RUNNING,CreatedAt:1605311295758016287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pmsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f9ac0-e218-41b1-aadd-518dce1505e1,},Annotations:map[string]string{io.kubernetes.container.
hash: dcc93b45,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb552abd8bd7bd334b49afeb31ff82914f12d1f436062f09b1e4a97f87d2689,PodSandboxId:c3155d201b16bb5ce1acf1d838da70ee810696e831b9b47c277302c0c6a715a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1605311291076398725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d14afba-9944-433e-818d-c5969fd23efc,},Annotations:map[string]string{io.kubernetes.container.hash: fc4b11dc,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402919d5381465bf52d673eb2473c96187c928ccc4675e27efce14fa0e6e0135,PodSandboxId:59f33c7f0806cc925dca35d1c76c53bdac01f2abf5399528ae892c0063b1fb9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,},ImageRef:k8s.gcr.io/coredns@sha256:621c94aaeedd98c1ca3eb724dc0a430b43eab58c3199832dc8eafd423150018a,State:CONTAINER_RUNNING,CreatedAt:1605311286728482060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d4dd4b4db-9nvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7678ab1d-33c2-4613-ab05-593cc3a77698,},Annotations:map[string]string{io.kubernetes.container.hash: 48c9a032,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13de2f15185127ee7c466dfc7982f19d4eb72366cb889c8b55b95da0f48e769,PodSandboxId:0bf87146b75ec750196aceea739c2cbbdab2d2e1241d1e3940c72687039b4b33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,},ImageRef:k8s.gcr.io/etcd@sha256:2f37d055a1d6be8d75c56896a6ecfd5c597505141a8d3ad8e6a9838657a4c52e,State:CONTAINER_RUNNING,CreatedAt:1605311272796769093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f93bc1ca1318454e4f387d079d3ed3,},Annotations:map[string]string{io.kub
ernetes.container.hash: 5e1c6040,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdae31c4913657c8ebfeb5ff077d9e1790fd9dc13032dc55d51508d679b9bdc4,PodSandboxId:2b89e7772256a9abb3de32f7f3078e54f5eea95620b36b794aaee76ce50e5d09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78b4180ab00d0fb99b1be2b5ef92a4831ad07f00f27e6746828f374497d79367,},ImageRef:k8s.gcr.io/kube-scheduler@sha256:482ca815d16c723cc4e2a6d37e6d0aed9706dbdf4241b9e32b1a19aea9c99ce0,State:CONTAINER_RUNNING,CreatedAt:1605311270129780451,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d56135b6f61d5db3f635e70693e7224d,},Annotations:map[string]string{io.kubernetes.container.hash: f5e21dc5,io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8f21e69e2c5c7995293d40698100bb2bf70c9384eca44295c32fbe60e414e8,PodSandboxId:00d34b55df0ae6348dd07e33d161f77d521ce2cc34d6b3d8322b0110f704c92b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c500a024ff843278184e5454ff6ee040a106c867c5a0361886fd3057cace2264,},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7424b172281ffbb3937b1fb53709a63190fb9badc29993b350c642cbd8f53a50,State:CONTAINER_RUNNING,CreatedAt:1605311269958130243,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b29d223d9edf898051121d1f2e3d54,},Annotations:map[string]string{io.kubernetes.container.hash: b4549cfc,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65b395688c7844df79cc9b218e069111a0ef5b5e5dd0fef2bb5435b4eb8564c,PodSandboxId:244887518e383f9f69ea00ec174f8907d5447722838ac8f3f39137302e075bcc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d2f090f2479fbf92c508100e0a6106b3516bb70421a465586661feb1494145a2,},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:edb80c790bea171018d7f26ec7dd5a4f15da007811dbb17cf29b1ce8fdb96b91,State:CONTAINER_RUNNING,CreatedAt:1605311269818807034,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61122dbf61c0657d93f147ac231888c,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6049d3,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=81352955-49b5-4ad2-ab6d-3ce5ef892027 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:43 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:43.988907976Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=68e1ba5b-8acb-4b3a-be17-f2b90677f3b6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:43 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:43.989066661Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:59" id=68e1ba5b-8acb-4b3a-be17-f2b90677f3b6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:43 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:43.989411125Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46b11ae2e59f58c5b764a3dfa5d1a376eed7038118e3d501f3918f28be76d8bb,PodSandboxId:18243ba7ebb0e5993683e8679909d7ad366b46a9e90b0e953abfd5cdb1cd6c73,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4,},ImageRef:docker.io/kubernetesui/metrics-scraper@sha256:f0350dbe60f3787b16c4f5f484bf78937df4a8391f9eb99af122e49e2155b097,State:CONTAINER_RUNNING,CreatedAt:1605311298944045900,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c8b69c96c-tr9cr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e64cb49d-a4bd-46c7-b3db-edec824639fe,},Annotations:map[string]string{io.kubernetes.container.hash: 558135cc,io.kubernetes.container.por
ts: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c452ced717e6c28fef3e562378b81031fde1f99151d134eb401858067ef99008,PodSandboxId:bd016b3a26069df4139bcd83638273850a7240dc5186922d3c6942077fc1041b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:503bc4b7440b9039d0a18858bba30906e25e2f690094b2b4f8735dfc3609cda2,},ImageRef:docker.io/kubernetesui/dashboard@sha256:0f9243b2dbcc9d631cd5cbdc950b6c4ae3ff5634a91f768d4a4b27f199626631,State:CONTAINER_RUNNING,CreatedAt:1605311298273524547,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-5ddb79bb9f-ndvzw,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fb40b588-4533-4fa9-a315-47b33b420ed6,},Annotations:map[string]strin
g{io.kubernetes.container.hash: e27068b3,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855d24364ab052e95bdfa23928361aff4e4341cdc3cf32478ac98a3a7a834ce2,PodSandboxId:582067d0e761e711e466bc37f0a1396c1f934cf1be6e638f4152e5f8988ab2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:bad58561c4be797bbea256940eea1d9967b7581d1103f9e4f03f32936c1ae289,},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:5f02deeb1870b24a5d26141e310a429511a5668c8488afef8228fb3faef27ca8,State:CONTAINER_RUNNING,CreatedAt:1605311297276443998,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d54d7102-688e-4eb4-aa
53-2a59b7110ab4,},Annotations:map[string]string{io.kubernetes.container.hash: f1ad06fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35617cb7d28c41610467d3341ebd7c80f72c5c9d994138c0b6fd50a76fe6cf7,PodSandboxId:771b1ff5051d5066abebb97620752a11a9d5628ffd7529cfb581317297f843e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ae3d9889423ede337df3814baa77445e566597a5a882f3cdf933b4d9e0025f0f,},ImageRef:k8s.gcr.io/kube-proxy@sha256:f2f8c6a7bf3ea6ddcdeb39c4b79ac3f7d900ede7600df924bb525478ecbbc534,State:CONTAINER_RUNNING,CreatedAt:1605311295758016287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pmsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f9ac0-e218-41b1-aadd-518dce1505e1,},Annotations:map[string]string{io.kubernetes.container.
hash: dcc93b45,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb552abd8bd7bd334b49afeb31ff82914f12d1f436062f09b1e4a97f87d2689,PodSandboxId:c3155d201b16bb5ce1acf1d838da70ee810696e831b9b47c277302c0c6a715a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1605311291076398725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d14afba-9944-433e-818d-c5969fd23efc,},Annotations:map[string]string{io.kubernetes.container.hash: fc4b11dc,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402919d5381465bf52d673eb2473c96187c928ccc4675e27efce14fa0e6e0135,PodSandboxId:59f33c7f0806cc925dca35d1c76c53bdac01f2abf5399528ae892c0063b1fb9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,},ImageRef:k8s.gcr.io/coredns@sha256:621c94aaeedd98c1ca3eb724dc0a430b43eab58c3199832dc8eafd423150018a,State:CONTAINER_RUNNING,CreatedAt:1605311286728482060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d4dd4b4db-9nvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7678ab1d-33c2-4613-ab05-593cc3a77698,},Annotations:map[string]string{io.kubernetes.container.hash: 48c9a032,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13de2f15185127ee7c466dfc7982f19d4eb72366cb889c8b55b95da0f48e769,PodSandboxId:0bf87146b75ec750196aceea739c2cbbdab2d2e1241d1e3940c72687039b4b33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,},ImageRef:k8s.gcr.io/etcd@sha256:2f37d055a1d6be8d75c56896a6ecfd5c597505141a8d3ad8e6a9838657a4c52e,State:CONTAINER_RUNNING,CreatedAt:1605311272796769093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f93bc1ca1318454e4f387d079d3ed3,},Annotations:map[string]string{io.kub
ernetes.container.hash: 5e1c6040,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdae31c4913657c8ebfeb5ff077d9e1790fd9dc13032dc55d51508d679b9bdc4,PodSandboxId:2b89e7772256a9abb3de32f7f3078e54f5eea95620b36b794aaee76ce50e5d09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78b4180ab00d0fb99b1be2b5ef92a4831ad07f00f27e6746828f374497d79367,},ImageRef:k8s.gcr.io/kube-scheduler@sha256:482ca815d16c723cc4e2a6d37e6d0aed9706dbdf4241b9e32b1a19aea9c99ce0,State:CONTAINER_RUNNING,CreatedAt:1605311270129780451,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d56135b6f61d5db3f635e70693e7224d,},Annotations:map[string]string{io.kubernetes.container.hash: f5e21dc5,io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8f21e69e2c5c7995293d40698100bb2bf70c9384eca44295c32fbe60e414e8,PodSandboxId:00d34b55df0ae6348dd07e33d161f77d521ce2cc34d6b3d8322b0110f704c92b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c500a024ff843278184e5454ff6ee040a106c867c5a0361886fd3057cace2264,},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7424b172281ffbb3937b1fb53709a63190fb9badc29993b350c642cbd8f53a50,State:CONTAINER_RUNNING,CreatedAt:1605311269958130243,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b29d223d9edf898051121d1f2e3d54,},Annotations:map[string]string{io.kubernetes.container.hash: b4549cfc,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65b395688c7844df79cc9b218e069111a0ef5b5e5dd0fef2bb5435b4eb8564c,PodSandboxId:244887518e383f9f69ea00ec174f8907d5447722838ac8f3f39137302e075bcc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d2f090f2479fbf92c508100e0a6106b3516bb70421a465586661feb1494145a2,},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:edb80c790bea171018d7f26ec7dd5a4f15da007811dbb17cf29b1ce8fdb96b91,State:CONTAINER_RUNNING,CreatedAt:1605311269818807034,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61122dbf61c0657d93f147ac231888c,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6049d3,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=68e1ba5b-8acb-4b3a-be17-f2b90677f3b6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:44 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:44.022723453Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a7e63ffe-5b2d-401d-82a1-5b6d34b47909 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:44 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:44.022901563Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:59" id=a7e63ffe-5b2d-401d-82a1-5b6d34b47909 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:44 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:44.023190820Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46b11ae2e59f58c5b764a3dfa5d1a376eed7038118e3d501f3918f28be76d8bb,PodSandboxId:18243ba7ebb0e5993683e8679909d7ad366b46a9e90b0e953abfd5cdb1cd6c73,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4,},ImageRef:docker.io/kubernetesui/metrics-scraper@sha256:f0350dbe60f3787b16c4f5f484bf78937df4a8391f9eb99af122e49e2155b097,State:CONTAINER_RUNNING,CreatedAt:1605311298944045900,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c8b69c96c-tr9cr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e64cb49d-a4bd-46c7-b3db-edec824639fe,},Annotations:map[string]string{io.kubernetes.container.hash: 558135cc,io.kubernetes.container.por
ts: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c452ced717e6c28fef3e562378b81031fde1f99151d134eb401858067ef99008,PodSandboxId:bd016b3a26069df4139bcd83638273850a7240dc5186922d3c6942077fc1041b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:503bc4b7440b9039d0a18858bba30906e25e2f690094b2b4f8735dfc3609cda2,},ImageRef:docker.io/kubernetesui/dashboard@sha256:0f9243b2dbcc9d631cd5cbdc950b6c4ae3ff5634a91f768d4a4b27f199626631,State:CONTAINER_RUNNING,CreatedAt:1605311298273524547,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-5ddb79bb9f-ndvzw,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fb40b588-4533-4fa9-a315-47b33b420ed6,},Annotations:map[string]strin
g{io.kubernetes.container.hash: e27068b3,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855d24364ab052e95bdfa23928361aff4e4341cdc3cf32478ac98a3a7a834ce2,PodSandboxId:582067d0e761e711e466bc37f0a1396c1f934cf1be6e638f4152e5f8988ab2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:bad58561c4be797bbea256940eea1d9967b7581d1103f9e4f03f32936c1ae289,},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:5f02deeb1870b24a5d26141e310a429511a5668c8488afef8228fb3faef27ca8,State:CONTAINER_RUNNING,CreatedAt:1605311297276443998,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d54d7102-688e-4eb4-aa
53-2a59b7110ab4,},Annotations:map[string]string{io.kubernetes.container.hash: f1ad06fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35617cb7d28c41610467d3341ebd7c80f72c5c9d994138c0b6fd50a76fe6cf7,PodSandboxId:771b1ff5051d5066abebb97620752a11a9d5628ffd7529cfb581317297f843e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ae3d9889423ede337df3814baa77445e566597a5a882f3cdf933b4d9e0025f0f,},ImageRef:k8s.gcr.io/kube-proxy@sha256:f2f8c6a7bf3ea6ddcdeb39c4b79ac3f7d900ede7600df924bb525478ecbbc534,State:CONTAINER_RUNNING,CreatedAt:1605311295758016287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pmsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f9ac0-e218-41b1-aadd-518dce1505e1,},Annotations:map[string]string{io.kubernetes.container.
hash: dcc93b45,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb552abd8bd7bd334b49afeb31ff82914f12d1f436062f09b1e4a97f87d2689,PodSandboxId:c3155d201b16bb5ce1acf1d838da70ee810696e831b9b47c277302c0c6a715a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1605311291076398725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d14afba-9944-433e-818d-c5969fd23efc,},Annotations:map[string]string{io.kubernetes.container.hash: fc4b11dc,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402919d5381465bf52d673eb2473c96187c928ccc4675e27efce14fa0e6e0135,PodSandboxId:59f33c7f0806cc925dca35d1c76c53bdac01f2abf5399528ae892c0063b1fb9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,},ImageRef:k8s.gcr.io/coredns@sha256:621c94aaeedd98c1ca3eb724dc0a430b43eab58c3199832dc8eafd423150018a,State:CONTAINER_RUNNING,CreatedAt:1605311286728482060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d4dd4b4db-9nvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7678ab1d-33c2-4613-ab05-593cc3a77698,},Annotations:map[string]string{io.kubernetes.container.hash: 48c9a032,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13de2f15185127ee7c466dfc7982f19d4eb72366cb889c8b55b95da0f48e769,PodSandboxId:0bf87146b75ec750196aceea739c2cbbdab2d2e1241d1e3940c72687039b4b33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,},ImageRef:k8s.gcr.io/etcd@sha256:2f37d055a1d6be8d75c56896a6ecfd5c597505141a8d3ad8e6a9838657a4c52e,State:CONTAINER_RUNNING,CreatedAt:1605311272796769093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f93bc1ca1318454e4f387d079d3ed3,},Annotations:map[string]string{io.kub
ernetes.container.hash: 5e1c6040,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdae31c4913657c8ebfeb5ff077d9e1790fd9dc13032dc55d51508d679b9bdc4,PodSandboxId:2b89e7772256a9abb3de32f7f3078e54f5eea95620b36b794aaee76ce50e5d09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78b4180ab00d0fb99b1be2b5ef92a4831ad07f00f27e6746828f374497d79367,},ImageRef:k8s.gcr.io/kube-scheduler@sha256:482ca815d16c723cc4e2a6d37e6d0aed9706dbdf4241b9e32b1a19aea9c99ce0,State:CONTAINER_RUNNING,CreatedAt:1605311270129780451,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d56135b6f61d5db3f635e70693e7224d,},Annotations:map[string]string{io.kubernetes.container.hash: f5e21dc5,io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8f21e69e2c5c7995293d40698100bb2bf70c9384eca44295c32fbe60e414e8,PodSandboxId:00d34b55df0ae6348dd07e33d161f77d521ce2cc34d6b3d8322b0110f704c92b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c500a024ff843278184e5454ff6ee040a106c867c5a0361886fd3057cace2264,},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7424b172281ffbb3937b1fb53709a63190fb9badc29993b350c642cbd8f53a50,State:CONTAINER_RUNNING,CreatedAt:1605311269958130243,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b29d223d9edf898051121d1f2e3d54,},Annotations:map[string]string{io.kubernetes.container.hash: b4549cfc,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65b395688c7844df79cc9b218e069111a0ef5b5e5dd0fef2bb5435b4eb8564c,PodSandboxId:244887518e383f9f69ea00ec174f8907d5447722838ac8f3f39137302e075bcc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d2f090f2479fbf92c508100e0a6106b3516bb70421a465586661feb1494145a2,},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:edb80c790bea171018d7f26ec7dd5a4f15da007811dbb17cf29b1ce8fdb96b91,State:CONTAINER_RUNNING,CreatedAt:1605311269818807034,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61122dbf61c0657d93f147ac231888c,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6049d3,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a7e63ffe-5b2d-401d-82a1-5b6d34b47909 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:44 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:44.065709051Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=930cbd8e-88e3-4e29-bed7-0c381528ecf4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:44 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:44.066161905Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:59" id=930cbd8e-88e3-4e29-bed7-0c381528ecf4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:44 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:44.067334592Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46b11ae2e59f58c5b764a3dfa5d1a376eed7038118e3d501f3918f28be76d8bb,PodSandboxId:18243ba7ebb0e5993683e8679909d7ad366b46a9e90b0e953abfd5cdb1cd6c73,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4,},ImageRef:docker.io/kubernetesui/metrics-scraper@sha256:f0350dbe60f3787b16c4f5f484bf78937df4a8391f9eb99af122e49e2155b097,State:CONTAINER_RUNNING,CreatedAt:1605311298944045900,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c8b69c96c-tr9cr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e64cb49d-a4bd-46c7-b3db-edec824639fe,},Annotations:map[string]string{io.kubernetes.container.hash: 558135cc,io.kubernetes.container.por
ts: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c452ced717e6c28fef3e562378b81031fde1f99151d134eb401858067ef99008,PodSandboxId:bd016b3a26069df4139bcd83638273850a7240dc5186922d3c6942077fc1041b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:503bc4b7440b9039d0a18858bba30906e25e2f690094b2b4f8735dfc3609cda2,},ImageRef:docker.io/kubernetesui/dashboard@sha256:0f9243b2dbcc9d631cd5cbdc950b6c4ae3ff5634a91f768d4a4b27f199626631,State:CONTAINER_RUNNING,CreatedAt:1605311298273524547,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-5ddb79bb9f-ndvzw,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fb40b588-4533-4fa9-a315-47b33b420ed6,},Annotations:map[string]strin
g{io.kubernetes.container.hash: e27068b3,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855d24364ab052e95bdfa23928361aff4e4341cdc3cf32478ac98a3a7a834ce2,PodSandboxId:582067d0e761e711e466bc37f0a1396c1f934cf1be6e638f4152e5f8988ab2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:bad58561c4be797bbea256940eea1d9967b7581d1103f9e4f03f32936c1ae289,},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:5f02deeb1870b24a5d26141e310a429511a5668c8488afef8228fb3faef27ca8,State:CONTAINER_RUNNING,CreatedAt:1605311297276443998,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d54d7102-688e-4eb4-aa
53-2a59b7110ab4,},Annotations:map[string]string{io.kubernetes.container.hash: f1ad06fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35617cb7d28c41610467d3341ebd7c80f72c5c9d994138c0b6fd50a76fe6cf7,PodSandboxId:771b1ff5051d5066abebb97620752a11a9d5628ffd7529cfb581317297f843e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ae3d9889423ede337df3814baa77445e566597a5a882f3cdf933b4d9e0025f0f,},ImageRef:k8s.gcr.io/kube-proxy@sha256:f2f8c6a7bf3ea6ddcdeb39c4b79ac3f7d900ede7600df924bb525478ecbbc534,State:CONTAINER_RUNNING,CreatedAt:1605311295758016287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pmsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f9ac0-e218-41b1-aadd-518dce1505e1,},Annotations:map[string]string{io.kubernetes.container.
hash: dcc93b45,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb552abd8bd7bd334b49afeb31ff82914f12d1f436062f09b1e4a97f87d2689,PodSandboxId:c3155d201b16bb5ce1acf1d838da70ee810696e831b9b47c277302c0c6a715a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1605311291076398725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d14afba-9944-433e-818d-c5969fd23efc,},Annotations:map[string]string{io.kubernetes.container.hash: fc4b11dc,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402919d5381465bf52d673eb2473c96187c928ccc4675e27efce14fa0e6e0135,PodSandboxId:59f33c7f0806cc925dca35d1c76c53bdac01f2abf5399528ae892c0063b1fb9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,},ImageRef:k8s.gcr.io/coredns@sha256:621c94aaeedd98c1ca3eb724dc0a430b43eab58c3199832dc8eafd423150018a,State:CONTAINER_RUNNING,CreatedAt:1605311286728482060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d4dd4b4db-9nvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7678ab1d-33c2-4613-ab05-593cc3a77698,},Annotations:map[string]string{io.kubernetes.container.hash: 48c9a032,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13de2f15185127ee7c466dfc7982f19d4eb72366cb889c8b55b95da0f48e769,PodSandboxId:0bf87146b75ec750196aceea739c2cbbdab2d2e1241d1e3940c72687039b4b33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,},ImageRef:k8s.gcr.io/etcd@sha256:2f37d055a1d6be8d75c56896a6ecfd5c597505141a8d3ad8e6a9838657a4c52e,State:CONTAINER_RUNNING,CreatedAt:1605311272796769093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f93bc1ca1318454e4f387d079d3ed3,},Annotations:map[string]string{io.kub
ernetes.container.hash: 5e1c6040,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdae31c4913657c8ebfeb5ff077d9e1790fd9dc13032dc55d51508d679b9bdc4,PodSandboxId:2b89e7772256a9abb3de32f7f3078e54f5eea95620b36b794aaee76ce50e5d09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78b4180ab00d0fb99b1be2b5ef92a4831ad07f00f27e6746828f374497d79367,},ImageRef:k8s.gcr.io/kube-scheduler@sha256:482ca815d16c723cc4e2a6d37e6d0aed9706dbdf4241b9e32b1a19aea9c99ce0,State:CONTAINER_RUNNING,CreatedAt:1605311270129780451,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d56135b6f61d5db3f635e70693e7224d,},Annotations:map[string]string{io.kubernetes.container.hash: f5e21dc5,io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8f21e69e2c5c7995293d40698100bb2bf70c9384eca44295c32fbe60e414e8,PodSandboxId:00d34b55df0ae6348dd07e33d161f77d521ce2cc34d6b3d8322b0110f704c92b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c500a024ff843278184e5454ff6ee040a106c867c5a0361886fd3057cace2264,},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7424b172281ffbb3937b1fb53709a63190fb9badc29993b350c642cbd8f53a50,State:CONTAINER_RUNNING,CreatedAt:1605311269958130243,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b29d223d9edf898051121d1f2e3d54,},Annotations:map[string]string{io.kubernetes.container.hash: b4549cfc,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65b395688c7844df79cc9b218e069111a0ef5b5e5dd0fef2bb5435b4eb8564c,PodSandboxId:244887518e383f9f69ea00ec174f8907d5447722838ac8f3f39137302e075bcc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d2f090f2479fbf92c508100e0a6106b3516bb70421a465586661feb1494145a2,},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:edb80c790bea171018d7f26ec7dd5a4f15da007811dbb17cf29b1ce8fdb96b91,State:CONTAINER_RUNNING,CreatedAt:1605311269818807034,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61122dbf61c0657d93f147ac231888c,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6049d3,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=930cbd8e-88e3-4e29-bed7-0c381528ecf4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:44 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:44.099896242Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5f4872a6-f547-4283-92e3-6b0aecb3dbd5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:44 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:44.099978823Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:59" id=5f4872a6-f547-4283-92e3-6b0aecb3dbd5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* Nov 13 23:49:44 crio-20201113234030-7409 crio[3710]: time="2020-11-13 23:49:44.100297884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46b11ae2e59f58c5b764a3dfa5d1a376eed7038118e3d501f3918f28be76d8bb,PodSandboxId:18243ba7ebb0e5993683e8679909d7ad366b46a9e90b0e953abfd5cdb1cd6c73,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4,},ImageRef:docker.io/kubernetesui/metrics-scraper@sha256:f0350dbe60f3787b16c4f5f484bf78937df4a8391f9eb99af122e49e2155b097,State:CONTAINER_RUNNING,CreatedAt:1605311298944045900,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c8b69c96c-tr9cr,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: e64cb49d-a4bd-46c7-b3db-edec824639fe,},Annotations:map[string]string{io.kubernetes.container.hash: 558135cc,io.kubernetes.container.por
ts: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c452ced717e6c28fef3e562378b81031fde1f99151d134eb401858067ef99008,PodSandboxId:bd016b3a26069df4139bcd83638273850a7240dc5186922d3c6942077fc1041b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:503bc4b7440b9039d0a18858bba30906e25e2f690094b2b4f8735dfc3609cda2,},ImageRef:docker.io/kubernetesui/dashboard@sha256:0f9243b2dbcc9d631cd5cbdc950b6c4ae3ff5634a91f768d4a4b27f199626631,State:CONTAINER_RUNNING,CreatedAt:1605311298273524547,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-5ddb79bb9f-ndvzw,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: fb40b588-4533-4fa9-a315-47b33b420ed6,},Annotations:map[string]strin
g{io.kubernetes.container.hash: e27068b3,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:855d24364ab052e95bdfa23928361aff4e4341cdc3cf32478ac98a3a7a834ce2,PodSandboxId:582067d0e761e711e466bc37f0a1396c1f934cf1be6e638f4152e5f8988ab2a3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:bad58561c4be797bbea256940eea1d9967b7581d1103f9e4f03f32936c1ae289,},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:5f02deeb1870b24a5d26141e310a429511a5668c8488afef8228fb3faef27ca8,State:CONTAINER_RUNNING,CreatedAt:1605311297276443998,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d54d7102-688e-4eb4-aa
53-2a59b7110ab4,},Annotations:map[string]string{io.kubernetes.container.hash: f1ad06fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c35617cb7d28c41610467d3341ebd7c80f72c5c9d994138c0b6fd50a76fe6cf7,PodSandboxId:771b1ff5051d5066abebb97620752a11a9d5628ffd7529cfb581317297f843e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ae3d9889423ede337df3814baa77445e566597a5a882f3cdf933b4d9e0025f0f,},ImageRef:k8s.gcr.io/kube-proxy@sha256:f2f8c6a7bf3ea6ddcdeb39c4b79ac3f7d900ede7600df924bb525478ecbbc534,State:CONTAINER_RUNNING,CreatedAt:1605311295758016287,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7pmsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 644f9ac0-e218-41b1-aadd-518dce1505e1,},Annotations:map[string]string{io.kubernetes.container.
hash: dcc93b45,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb552abd8bd7bd334b49afeb31ff82914f12d1f436062f09b1e4a97f87d2689,PodSandboxId:c3155d201b16bb5ce1acf1d838da70ee810696e831b9b47c277302c0c6a715a1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,},ImageRef:docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998,State:CONTAINER_RUNNING,CreatedAt:1605311291076398725,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d14afba-9944-433e-818d-c5969fd23efc,},Annotations:map[string]string{io.kubernetes.container.hash: fc4b11dc,io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402919d5381465bf52d673eb2473c96187c928ccc4675e27efce14fa0e6e0135,PodSandboxId:59f33c7f0806cc925dca35d1c76c53bdac01f2abf5399528ae892c0063b1fb9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c,},ImageRef:k8s.gcr.io/coredns@sha256:621c94aaeedd98c1ca3eb724dc0a430b43eab58c3199832dc8eafd423150018a,State:CONTAINER_RUNNING,CreatedAt:1605311286728482060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5d4dd4b4db-9nvsc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7678ab1d-33c2-4613-ab05-593cc3a77698,},Annotations:map[string]string{io.kubernetes.container.hash: 48c9a032,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13de2f15185127ee7c466dfc7982f19d4eb72366cb889c8b55b95da0f48e769,PodSandboxId:0bf87146b75ec750196aceea739c2cbbdab2d2e1241d1e3940c72687039b4b33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,},ImageRef:k8s.gcr.io/etcd@sha256:2f37d055a1d6be8d75c56896a6ecfd5c597505141a8d3ad8e6a9838657a4c52e,State:CONTAINER_RUNNING,CreatedAt:1605311272796769093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f93bc1ca1318454e4f387d079d3ed3,},Annotations:map[string]string{io.kub
ernetes.container.hash: 5e1c6040,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdae31c4913657c8ebfeb5ff077d9e1790fd9dc13032dc55d51508d679b9bdc4,PodSandboxId:2b89e7772256a9abb3de32f7f3078e54f5eea95620b36b794aaee76ce50e5d09,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:78b4180ab00d0fb99b1be2b5ef92a4831ad07f00f27e6746828f374497d79367,},ImageRef:k8s.gcr.io/kube-scheduler@sha256:482ca815d16c723cc4e2a6d37e6d0aed9706dbdf4241b9e32b1a19aea9c99ce0,State:CONTAINER_RUNNING,CreatedAt:1605311270129780451,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d56135b6f61d5db3f635e70693e7224d,},Annotations:map[string]string{io.kubernetes.container.hash: f5e21dc5,io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8f21e69e2c5c7995293d40698100bb2bf70c9384eca44295c32fbe60e414e8,PodSandboxId:00d34b55df0ae6348dd07e33d161f77d521ce2cc34d6b3d8322b0110f704c92b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c500a024ff843278184e5454ff6ee040a106c867c5a0361886fd3057cace2264,},ImageRef:k8s.gcr.io/kube-apiserver@sha256:7424b172281ffbb3937b1fb53709a63190fb9badc29993b350c642cbd8f53a50,State:CONTAINER_RUNNING,CreatedAt:1605311269958130243,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b29d223d9edf898051121d1f2e3d54,},Annotations:map[string]string{io.kubernetes.container.hash: b4549cfc,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d65b395688c7844df79cc9b218e069111a0ef5b5e5dd0fef2bb5435b4eb8564c,PodSandboxId:244887518e383f9f69ea00ec174f8907d5447722838ac8f3f39137302e075bcc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d2f090f2479fbf92c508100e0a6106b3516bb70421a465586661feb1494145a2,},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:edb80c790bea171018d7f26ec7dd5a4f15da007811dbb17cf29b1ce8fdb96b91,State:CONTAINER_RUNNING,CreatedAt:1605311269818807034,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-crio-20201113234030-7409,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b61122dbf61c0657d93f147ac231888c,},Annotations:map[string]string{io.kubernetes.container.hash: 7a6049d3,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5f4872a6-f547-4283-92e3-6b0aecb3dbd5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                               CREATED              STATE               NAME                        ATTEMPT             POD ID
	* 46b11ae2e59f5       86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4                                    About a minute ago   Running             dashboard-metrics-scraper   0                   18243ba7ebb0e
	* c452ced717e6c       503bc4b7440b9039d0a18858bba30906e25e2f690094b2b4f8735dfc3609cda2                                    About a minute ago   Running             kubernetes-dashboard        0                   bd016b3a26069
	* 855d24364ab05       bad58561c4be797bbea256940eea1d9967b7581d1103f9e4f03f32936c1ae289                                    About a minute ago   Running             storage-provisioner         0                   582067d0e761e
	* c35617cb7d28c       ae3d9889423ede337df3814baa77445e566597a5a882f3cdf933b4d9e0025f0f                                    About a minute ago   Running             kube-proxy                  0                   771b1ff5051d5
	* 7eb552abd8bd7       docker.io/library/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   About a minute ago   Running             busybox                     0                   c3155d201b16b
	* 402919d538146       eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c                                    About a minute ago   Running             coredns                     0                   59f33c7f0806c
	* d13de2f151851       2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d                                    About a minute ago   Running             etcd                        0                   0bf87146b75ec
	* fdae31c491365       78b4180ab00d0fb99b1be2b5ef92a4831ad07f00f27e6746828f374497d79367                                    About a minute ago   Running             kube-scheduler              0                   2b89e7772256a
	* 5b8f21e69e2c5       c500a024ff843278184e5454ff6ee040a106c867c5a0361886fd3057cace2264                                    About a minute ago   Running             kube-apiserver              0                   00d34b55df0ae
	* d65b395688c78       d2f090f2479fbf92c508100e0a6106b3516bb70421a465586661feb1494145a2                                    About a minute ago   Running             kube-controller-manager     0                   244887518e383
	* 
	* ==> coredns [402919d5381465bf52d673eb2473c96187c928ccc4675e27efce14fa0e6e0135] <==
	* .:53
	* 2020-11-13T23:44:46.733Z [INFO] CoreDNS-1.3.1
	* 2020-11-13T23:44:46.733Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	* CoreDNS-1.3.1
	* linux/amd64, go1.11.4, 6b56a9c
	* 2020-11-13T23:44:46.733Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
	* E1113 23:45:11.741845       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* E1113 23:45:11.747587       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* E1113 23:45:11.752962       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* .:53
	* 2020-11-13T23:48:11.933Z [INFO] CoreDNS-1.3.1
	* 2020-11-13T23:48:11.933Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	* CoreDNS-1.3.1
	* linux/amd64, go1.11.4, 6b56a9c
	* 2020-11-13T23:48:11.933Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
	* E1113 23:48:36.929755       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* E1113 23:48:36.930037       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* E1113 23:48:36.930119       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	* 
	* ==> describe nodes <==
	* Name:               crio-20201113234030-7409
	* Roles:              master
	* Labels:             beta.kubernetes.io/arch=amd64
	*                     beta.kubernetes.io/os=linux
	*                     kubernetes.io/arch=amd64
	*                     kubernetes.io/hostname=crio-20201113234030-7409
	*                     kubernetes.io/os=linux
	*                     minikube.k8s.io/commit=f1624ef53a2521d2c375e24d59fe2d2c53b4ded0
	*                     minikube.k8s.io/name=crio-20201113234030-7409
	*                     minikube.k8s.io/updated_at=2020_11_13T23_44_24_0700
	*                     minikube.k8s.io/version=v1.15.0
	*                     node-role.kubernetes.io/master=
	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	*                     node.alpha.kubernetes.io/ttl: 0
	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
	* CreationTimestamp:  Fri, 13 Nov 2020 23:44:17 +0000
	* Taints:             <none>
	* Unschedulable:      false
	* Conditions:
	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	*   ----             ------  -----------------                 ------------------                ------                       -------
	*   MemoryPressure   False   Fri, 13 Nov 2020 23:49:21 +0000   Fri, 13 Nov 2020 23:44:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	*   DiskPressure     False   Fri, 13 Nov 2020 23:49:21 +0000   Fri, 13 Nov 2020 23:44:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	*   PIDPressure      False   Fri, 13 Nov 2020 23:49:21 +0000   Fri, 13 Nov 2020 23:44:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	*   Ready            True    Fri, 13 Nov 2020 23:49:21 +0000   Fri, 13 Nov 2020 23:44:08 +0000   KubeletReady                 kubelet is posting ready status
	* Addresses:
	*   InternalIP:  192.168.39.90
	*   Hostname:    crio-20201113234030-7409
	* Capacity:
	*  cpu:                2
	*  ephemeral-storage:  16954224Ki
	*  hugepages-2Mi:      0
	*  memory:             2083920Ki
	*  pods:               110
	* Allocatable:
	*  cpu:                2
	*  ephemeral-storage:  16954224Ki
	*  hugepages-2Mi:      0
	*  memory:             2083920Ki
	*  pods:               110
	* System Info:
	*  Machine ID:                 f7c0699993ae41a787da051015c1dedd
	*  System UUID:                f7c06999-93ae-41a7-87da-051015c1dedd
	*  Boot ID:                    717f6206-1b5a-4ca0-b7f6-048357351ba2
	*  Kernel Version:             4.19.150
	*  OS Image:                   Buildroot 2020.02.7
	*  Operating System:           linux
	*  Architecture:               amd64
	*  Container Runtime Version:  cri-o://1.18.3
	*  Kubelet Version:            v1.15.7
	*  Kube-Proxy Version:         v1.15.7
	* PodCIDR:                     10.244.0.0/24
	* Non-terminated Pods:         (10 in total)
	*   Namespace                  Name                                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	*   ---------                  ----                                                ------------  ----------  ---------------  -------------  ---
	*   default                    busybox                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	*   kube-system                coredns-5d4dd4b4db-9nvsc                            100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m8s
	*   kube-system                etcd-crio-20201113234030-7409                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	*   kube-system                kube-apiserver-crio-20201113234030-7409             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m27s
	*   kube-system                kube-controller-manager-crio-20201113234030-7409    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	*   kube-system                kube-proxy-7pmsg                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	*   kube-system                kube-scheduler-crio-20201113234030-7409             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m28s
	*   kube-system                storage-provisioner                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	*   kubernetes-dashboard       dashboard-metrics-scraper-c8b69c96c-tr9cr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	*   kubernetes-dashboard       kubernetes-dashboard-5ddb79bb9f-ndvzw               0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	* Allocated resources:
	*   (Total limits may be over 100 percent, i.e., overcommitted.)
	*   Resource           Requests    Limits
	*   --------           --------    ------
	*   cpu                650m (32%)  0 (0%)
	*   memory             70Mi (3%)   170Mi (8%)
	*   ephemeral-storage  0 (0%)      0 (0%)
	* Events:
	*   Type    Reason                   Age                    From                                  Message
	*   ----    ------                   ----                   ----                                  -------
	*   Normal  NodeHasSufficientMemory  5m42s (x8 over 5m43s)  kubelet, crio-20201113234030-7409     Node crio-20201113234030-7409 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    5m42s (x7 over 5m43s)  kubelet, crio-20201113234030-7409     Node crio-20201113234030-7409 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     5m42s (x8 over 5m43s)  kubelet, crio-20201113234030-7409     Node crio-20201113234030-7409 status is now: NodeHasSufficientPID
	*   Normal  Starting                 5m2s                   kube-proxy, crio-20201113234030-7409  Starting kube-proxy.
	*   Normal  Starting                 119s                   kubelet, crio-20201113234030-7409     Starting kubelet.
	*   Normal  NodeHasSufficientMemory  118s (x8 over 118s)    kubelet, crio-20201113234030-7409     Node crio-20201113234030-7409 status is now: NodeHasSufficientMemory
	*   Normal  NodeHasNoDiskPressure    118s (x8 over 118s)    kubelet, crio-20201113234030-7409     Node crio-20201113234030-7409 status is now: NodeHasNoDiskPressure
	*   Normal  NodeHasSufficientPID     118s (x7 over 118s)    kubelet, crio-20201113234030-7409     Node crio-20201113234030-7409 status is now: NodeHasSufficientPID
	*   Normal  NodeAllocatableEnforced  118s                   kubelet, crio-20201113234030-7409     Updated Node Allocatable limit across pods
	*   Normal  Starting                 88s                    kube-proxy, crio-20201113234030-7409  Starting kube-proxy.
	* 
	* ==> dmesg <==
	* [Nov13 23:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	* [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	* [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	* [  +0.156653] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	* [  +5.746668] Unstable clock detected, switching default tracing clock to "global"
	*               If you want to keep using the local clock, then add:
	*                 "trace_clock=local"
	*               on the kernel command line
	* [  +0.000051] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	* [  +5.433386] systemd-fstab-generator[1157]: Ignoring "noauto" for root device
	* [  +0.061982] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	* [  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	* [  +1.803370] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1714 comm=systemd-network
	* [  +1.042447] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	* [  +0.935130] vboxguest: loading out-of-tree module taints kernel.
	* [  +0.009214] vboxguest: PCI device not found, probably running on physical hardware.
	* [  +2.961942] systemd-fstab-generator[2050]: Ignoring "noauto" for root device
	* [Nov13 23:47] systemd-fstab-generator[3045]: Ignoring "noauto" for root device
	* [Nov13 23:48] systemd-fstab-generator[3494]: Ignoring "noauto" for root device
	* [  +1.258214] kauditd_printk_skb: 20 callbacks suppressed
	* [  +2.043611] tee (3884): /proc/3303/oom_adj is deprecated, please use /proc/3303/oom_score_adj instead.
	* [ +10.914197] kauditd_printk_skb: 20 callbacks suppressed
	* [ +16.316495] NFSD: Unable to end grace period: -110
	* [ +13.082057] kauditd_printk_skb: 71 callbacks suppressed
	* 
	* ==> etcd [d13de2f15185127ee7c466dfc7982f19d4eb72366cb889c8b55b95da0f48e769] <==
	* 2020-11-13 23:47:52.914771 I | raft: 8d381aaacda0b9bd became follower at term 2
	* 2020-11-13 23:47:52.914838 I | raft: newRaft 8d381aaacda0b9bd [peers: [], term: 2, commit: 498, applied: 0, lastindex: 498, lastterm: 2]
	* 2020-11-13 23:47:52.937184 W | auth: simple token is not cryptographically signed
	* 2020-11-13 23:47:52.945436 I | etcdserver: starting server... [version: 3.3.10, cluster version: to_be_decided]
	* 2020-11-13 23:47:52.949875 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	* 2020-11-13 23:47:52.951114 I | etcdserver/membership: added member 8d381aaacda0b9bd [https://192.168.39.90:2380] to cluster 8cf3a1558a63fa9e
	* 2020-11-13 23:47:52.951336 N | etcdserver/membership: set the initial cluster version to 3.3
	* 2020-11-13 23:47:52.951429 I | etcdserver/api: enabled capabilities for version 3.3
	* 2020-11-13 23:47:52.954717 I | embed: listening for metrics on http://192.168.39.90:2381
	* 2020-11-13 23:47:52.956886 I | embed: listening for metrics on http://127.0.0.1:2381
	* 2020-11-13 23:47:54.615842 I | raft: 8d381aaacda0b9bd is starting a new election at term 2
	* 2020-11-13 23:47:54.615902 I | raft: 8d381aaacda0b9bd became candidate at term 3
	* 2020-11-13 23:47:54.615947 I | raft: 8d381aaacda0b9bd received MsgVoteResp from 8d381aaacda0b9bd at term 3
	* 2020-11-13 23:47:54.615964 I | raft: 8d381aaacda0b9bd became leader at term 3
	* 2020-11-13 23:47:54.615978 I | raft: raft.node: 8d381aaacda0b9bd elected leader 8d381aaacda0b9bd at term 3
	* 2020-11-13 23:47:54.616478 I | etcdserver: published {Name:crio-20201113234030-7409 ClientURLs:[https://192.168.39.90:2379]} to cluster 8cf3a1558a63fa9e
	* 2020-11-13 23:47:54.617391 I | embed: ready to serve client requests
	* 2020-11-13 23:47:54.617898 I | embed: ready to serve client requests
	* 2020-11-13 23:47:54.620053 I | embed: serving client requests on 127.0.0.1:2379
	* 2020-11-13 23:47:54.620114 I | embed: serving client requests on 192.168.39.90:2379
	* proto: no coders for int
	* proto: no encoder for ValueSize int [GetProperties]
	* 2020-11-13 23:48:04.035036 W | etcdserver: request "header:<ID:13383983152324138109 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-crio-20201113234030-7409.164736181c8d0064\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-crio-20201113234030-7409.164736181c8d0064\" value_size:490 lease:4160611115469362065 >> failure:<>>" with result "size:16" took too long (211.056157ms) to execute
	* 2020-11-13 23:48:04.035324 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-crio-20201113234030-7409\" " with result "range_response_count:1 size:2169" took too long (182.371843ms) to execute
	* 2020-11-13 23:48:04.036068 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/deployment-controller\" " with result "range_response_count:1 size:260" took too long (150.713278ms) to execute
	* 
	* ==> kernel <==
	*  23:49:44 up 3 min,  0 users,  load average: 1.17, 0.88, 0.37
	* Linux crio-20201113234030-7409 4.19.150 #1 SMP Fri Nov 6 15:58:07 PST 2020 x86_64 GNU/Linux
	* PRETTY_NAME="Buildroot 2020.02.7"
	* 
	* ==> kube-apiserver [5b8f21e69e2c5c7995293d40698100bb2bf70c9384eca44295c32fbe60e414e8] <==
	* I1113 23:48:00.466541       1 controller.go:81] Starting OpenAPI AggregationController
	* I1113 23:48:00.481008       1 crdregistration_controller.go:112] Starting crd-autoregister controller
	* I1113 23:48:00.481290       1 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
	* I1113 23:48:00.481443       1 controller.go:83] Starting OpenAPI controller
	* I1113 23:48:00.481469       1 customresource_discovery_controller.go:208] Starting DiscoveryController
	* I1113 23:48:00.481579       1 naming_controller.go:288] Starting NamingConditionController
	* I1113 23:48:00.481675       1 establishing_controller.go:73] Starting EstablishingController
	* I1113 23:48:00.481705       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
	* I1113 23:48:00.711700       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	* I1113 23:48:00.753945       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	* E1113 23:48:00.763870       1 controller.go:148] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	* I1113 23:48:00.767829       1 cache.go:39] Caches are synced for autoregister controller
	* I1113 23:48:00.768707       1 cache.go:39] Caches are synced for AvailableConditionController controller
	* I1113 23:48:00.782379       1 controller_utils.go:1036] Caches are synced for crd-autoregister controller
	* I1113 23:48:01.461948       1 controller.go:107] OpenAPI AggregationController: Processing item 
	* I1113 23:48:01.462117       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	* I1113 23:48:01.462141       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	* I1113 23:48:01.521694       1 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
	* I1113 23:48:05.754875       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	* I1113 23:48:05.803393       1 controller.go:606] quota admission added evaluator for: deployments.apps
	* I1113 23:48:05.934325       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	* I1113 23:48:05.956757       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	* I1113 23:48:05.972011       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	* I1113 23:48:15.938177       1 controller.go:606] quota admission added evaluator for: endpoints
	* I1113 23:48:15.953919       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	* 
	* ==> kube-controller-manager [d65b395688c7844df79cc9b218e069111a0ef5b5e5dd0fef2bb5435b4eb8564c] <==
	* I1113 23:48:15.889421       1 controller_utils.go:1036] Caches are synced for job controller
	* I1113 23:48:15.889926       1 controller_utils.go:1036] Caches are synced for persistent volume controller
	* I1113 23:48:15.906493       1 controller_utils.go:1036] Caches are synced for resource quota controller
	* I1113 23:48:15.920300       1 controller_utils.go:1036] Caches are synced for ReplicationController controller
	* I1113 23:48:15.927975       1 controller_utils.go:1036] Caches are synced for endpoint controller
	* I1113 23:48:15.936570       1 controller_utils.go:1036] Caches are synced for certificate controller
	* I1113 23:48:15.936758       1 controller_utils.go:1036] Caches are synced for stateful set controller
	* I1113 23:48:15.950485       1 controller_utils.go:1036] Caches are synced for deployment controller
	* I1113 23:48:15.966548       1 controller_utils.go:1036] Caches are synced for ReplicaSet controller
	* I1113 23:48:15.975700       1 controller_utils.go:1036] Caches are synced for HPA controller
	* I1113 23:48:15.978474       1 controller_utils.go:1036] Caches are synced for disruption controller
	* I1113 23:48:15.978727       1 disruption.go:338] Sending events to api server.
	* I1113 23:48:15.980309       1 controller_utils.go:1036] Caches are synced for daemon sets controller
	* I1113 23:48:15.986404       1 controller_utils.go:1036] Caches are synced for GC controller
	* I1113 23:48:15.994485       1 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"5a8a82df-e730-4a28-b914-5a9a8cee3b22", APIVersion:"apps/v1", ResourceVersion:"542", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-5ddb79bb9f to 1
	* I1113 23:48:15.994740       1 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"7ab9e3df-6697-4500-b4c6-c73bd770cc9b", APIVersion:"apps/v1", ResourceVersion:"541", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-c8b69c96c to 1
	* I1113 23:48:16.024567       1 controller_utils.go:1036] Caches are synced for certificate controller
	* I1113 23:48:16.104046       1 controller_utils.go:1036] Caches are synced for attach detach controller
	* I1113 23:48:16.137162       1 controller_utils.go:1036] Caches are synced for garbage collector controller
	* I1113 23:48:16.160240       1 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5ddb79bb9f", UID:"a9a41991-d8b8-41d2-b875-a4a3b3ef1119", APIVersion:"apps/v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5ddb79bb9f-ndvzw
	* I1113 23:48:16.174252       1 controller_utils.go:1036] Caches are synced for garbage collector controller
	* I1113 23:48:16.174276       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	* I1113 23:48:16.201676       1 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
	* I1113 23:48:16.212139       1 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-c8b69c96c", UID:"17bb9d23-9ea0-45db-8bed-33e607ac22e5", APIVersion:"apps/v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-c8b69c96c-tr9cr
	* I1113 23:48:16.302483       1 controller_utils.go:1036] Caches are synced for resource quota controller
	* 
	* ==> kube-proxy [c35617cb7d28c41610467d3341ebd7c80f72c5c9d994138c0b6fd50a76fe6cf7] <==
	* I1113 23:44:42.089010       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
	* I1113 23:44:42.089479       1 conntrack.go:52] Setting nf_conntrack_max to 131072
	* I1113 23:44:42.090218       1 conntrack.go:83] Setting conntrack hashsize to 32768
	* I1113 23:44:42.094532       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1113 23:44:42.095353       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1113 23:44:42.095907       1 config.go:187] Starting service config controller
	* I1113 23:44:42.096148       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
	* I1113 23:44:42.096323       1 config.go:96] Starting endpoints config controller
	* I1113 23:44:42.096437       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
	* I1113 23:44:42.212138       1 controller_utils.go:1036] Caches are synced for endpoints config controller
	* I1113 23:44:42.212654       1 controller_utils.go:1036] Caches are synced for service config controller
	* W1113 23:48:16.766866       1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
	* I1113 23:48:16.799093       1 server_others.go:143] Using iptables Proxier.
	* I1113 23:48:16.802735       1 server.go:534] Version: v1.15.7
	* I1113 23:48:16.829550       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
	* I1113 23:48:16.830922       1 conntrack.go:52] Setting nf_conntrack_max to 131072
	* I1113 23:48:16.831892       1 conntrack.go:83] Setting conntrack hashsize to 32768
	* I1113 23:48:16.837234       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
	* I1113 23:48:16.837413       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
	* I1113 23:48:16.837743       1 config.go:187] Starting service config controller
	* I1113 23:48:16.837879       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
	* I1113 23:48:16.838264       1 config.go:96] Starting endpoints config controller
	* I1113 23:48:16.838317       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
	* I1113 23:48:16.939206       1 controller_utils.go:1036] Caches are synced for endpoints config controller
	* I1113 23:48:16.939320       1 controller_utils.go:1036] Caches are synced for service config controller
	* 
	* ==> kube-scheduler [fdae31c4913657c8ebfeb5ff077d9e1790fd9dc13032dc55d51508d679b9bdc4] <==
	* E1113 23:44:18.112180       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1113 23:44:18.113471       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1113 23:44:18.116039       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1113 23:44:18.116450       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1113 23:44:18.119542       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* I1113 23:47:52.832971       1 serving.go:319] Generated self-signed cert in-memory
	* W1113 23:47:53.416824       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
	* W1113 23:47:53.416947       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
	* W1113 23:47:53.416978       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
	* I1113 23:47:53.428947       1 server.go:142] Version: v1.15.7
	* I1113 23:47:53.429095       1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	* W1113 23:47:53.431157       1 authorization.go:47] Authorization is disabled
	* W1113 23:47:53.431248       1 authentication.go:55] Authentication is disabled
	* I1113 23:47:53.431296       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	* I1113 23:47:53.432572       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	* E1113 23:48:00.577976       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	* E1113 23:48:00.658731       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	* E1113 23:48:00.695981       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	* E1113 23:48:00.696447       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	* E1113 23:48:00.696584       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	* E1113 23:48:00.696844       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	* E1113 23:48:00.696998       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	* E1113 23:48:00.697255       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	* E1113 23:48:00.697564       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	* E1113 23:48:00.701744       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2020-11-13 23:46:27 UTC, end at Fri 2020-11-13 23:49:45 UTC. --
	* Nov 13 23:48:04 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:04.298975    3053 kuberuntime_manager.go:709] Failed to get pod sandbox status: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"; Skipping pod "coredns-5d4dd4b4db-9nvsc_kube-system(7678ab1d-33c2-4613-ab05-593cc3a77698)"
	* Nov 13 23:48:04 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:04.299022    3053 pod_workers.go:190] Error syncing pod 7678ab1d-33c2-4613-ab05-593cc3a77698 ("coredns-5d4dd4b4db-9nvsc_kube-system(7678ab1d-33c2-4613-ab05-593cc3a77698)"), skipping: failed to SyncPod: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:04 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:04.330230    3053 remote_runtime.go:182] ListPodSandbox with filter nil from runtime service failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:04 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:04.330302    3053 kuberuntime_sandbox.go:210] ListPodSandbox failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:04 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:04.330317    3053 generic.go:205] GenericPLEG: Unable to retrieve pods: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:04 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:04.339587    3053 kuberuntime_manager.go:709] Failed to get pod sandbox status: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"; Skipping pod "busybox_default(5d14afba-9944-433e-818d-c5969fd23efc)"
	* Nov 13 23:48:04 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:04.339704    3053 pod_workers.go:190] Error syncing pod 5d14afba-9944-433e-818d-c5969fd23efc ("busybox_default(5d14afba-9944-433e-818d-c5969fd23efc)"), skipping: failed to SyncPod: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:05 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:05.331312    3053 remote_runtime.go:182] ListPodSandbox with filter nil from runtime service failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:05 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:05.331383    3053 kuberuntime_sandbox.go:210] ListPodSandbox failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:05 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:05.331401    3053 generic.go:205] GenericPLEG: Unable to retrieve pods: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:05 crio-20201113234030-7409 kubelet[3053]: I1113 23:48:05.625463    3053 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0009ae070, CONNECTING
	* Nov 13 23:48:05 crio-20201113234030-7409 kubelet[3053]: I1113 23:48:05.625490    3053 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0009ae070, READY
	* Nov 13 23:48:05 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:05.932757    3053 remote_runtime.go:182] ListPodSandbox with filter &PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},} from runtime service failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:05 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:05.932841    3053 kuberuntime_sandbox.go:210] ListPodSandbox failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:05 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:05.932856    3053 kubelet_pods.go:1027] Error listing containers: &status.statusError{Code:14, Message:"all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory\"", Details:[]*any.Any(nil)}
	* Nov 13 23:48:05 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:05.932887    3053 kubelet.go:1977] Failed cleaning pods: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	* Nov 13 23:48:06 crio-20201113234030-7409 kubelet[3053]: I1113 23:48:06.163589    3053 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0009da200, CONNECTING
	* Nov 13 23:48:06 crio-20201113234030-7409 kubelet[3053]: I1113 23:48:06.163802    3053 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0009da200, READY
	* Nov 13 23:48:16 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:16.249915    3053 reflector.go:125] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-j6zmt": Failed to list *v1.Secret: secrets "kubernetes-dashboard-token-j6zmt" is forbidden: User "system:node:crio-20201113234030-7409" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node "crio-20201113234030-7409" and this object
	* Nov 13 23:48:16 crio-20201113234030-7409 kubelet[3053]: I1113 23:48:16.255893    3053 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/fb40b588-4533-4fa9-a315-47b33b420ed6-tmp-volume") pod "kubernetes-dashboard-5ddb79bb9f-ndvzw" (UID: "fb40b588-4533-4fa9-a315-47b33b420ed6")
	* Nov 13 23:48:16 crio-20201113234030-7409 kubelet[3053]: I1113 23:48:16.256558    3053 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-j6zmt" (UniqueName: "kubernetes.io/secret/fb40b588-4533-4fa9-a315-47b33b420ed6-kubernetes-dashboard-token-j6zmt") pod "kubernetes-dashboard-5ddb79bb9f-ndvzw" (UID: "fb40b588-4533-4fa9-a315-47b33b420ed6")
	* Nov 13 23:48:16 crio-20201113234030-7409 kubelet[3053]: I1113 23:48:16.357989    3053 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-j6zmt" (UniqueName: "kubernetes.io/secret/e64cb49d-a4bd-46c7-b3db-edec824639fe-kubernetes-dashboard-token-j6zmt") pod "dashboard-metrics-scraper-c8b69c96c-tr9cr" (UID: "e64cb49d-a4bd-46c7-b3db-edec824639fe")
	* Nov 13 23:48:16 crio-20201113234030-7409 kubelet[3053]: I1113 23:48:16.358337    3053 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/e64cb49d-a4bd-46c7-b3db-edec824639fe-tmp-volume") pod "dashboard-metrics-scraper-c8b69c96c-tr9cr" (UID: "e64cb49d-a4bd-46c7-b3db-edec824639fe")
	* Nov 13 23:48:46 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:46.104574    3053 manager.go:1084] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7678ab1d_33c2_4613_ab05_593cc3a77698.slice/crio-59f33c7f0806cc925dca35d1c76c53bdac01f2abf5399528ae892c0063b1fb9e.scope: Error finding container 59f33c7f0806cc925dca35d1c76c53bdac01f2abf5399528ae892c0063b1fb9e: Status 404 returned error &{%!s(*http.body=&{0xc0008a0100 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x737ab0) %!s(func() error=0x737a40)}
	* Nov 13 23:48:46 crio-20201113234030-7409 kubelet[3053]: E1113 23:48:46.107977    3053 manager.go:1084] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d14afba_9944_433e_818d_c5969fd23efc.slice/crio-c3155d201b16bb5ce1acf1d838da70ee810696e831b9b47c277302c0c6a715a1.scope: Error finding container c3155d201b16bb5ce1acf1d838da70ee810696e831b9b47c277302c0c6a715a1: Status 404 returned error &{%!s(*http.body=&{0xc000cedbc0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x737ab0) %!s(func() error=0x737a40)}
	* 
	* ==> kubernetes-dashboard [c452ced717e6c28fef3e562378b81031fde1f99151d134eb401858067ef99008] <==
	* 2020/11/13 23:48:18 Using namespace: kubernetes-dashboard
	* 2020/11/13 23:48:18 Using in-cluster config to connect to apiserver
	* 2020/11/13 23:48:18 Using secret token for csrf signing
	* 2020/11/13 23:48:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	* 2020/11/13 23:48:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	* 2020/11/13 23:48:18 Successful initial request to the apiserver, version: v1.15.7
	* 2020/11/13 23:48:18 Generating JWE encryption key
	* 2020/11/13 23:48:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	* 2020/11/13 23:48:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	* 2020/11/13 23:48:19 Initializing JWE encryption key from synchronized object
	* 2020/11/13 23:48:19 Creating in-cluster Sidecar client
	* 2020/11/13 23:48:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	* 2020/11/13 23:48:19 Serving insecurely on HTTP port: 9090
	* 2020/11/13 23:48:49 Successful request to sidecar
	* 2020/11/13 23:48:18 Starting overwatch
	* 
	* ==> storage-provisioner [855d24364ab052e95bdfa23928361aff4e4341cdc3cf32478ac98a3a7a834ce2] <==
	* I1113 23:44:44.303528       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I1113 23:44:44.326430       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I1113 23:44:44.327067       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"79f37f59-638d-43b0-b081-00b1e98426a1", APIVersion:"v1", ResourceVersion:"386", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crio-20201113234030-7409_65dbb526-704c-44b8-8b61-8819b921f252 became leader
	* I1113 23:44:44.328929       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_crio-20201113234030-7409_65dbb526-704c-44b8-8b61-8819b921f252!
	* I1113 23:44:44.437931       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_crio-20201113234030-7409_65dbb526-704c-44b8-8b61-8819b921f252!
	* I1113 23:48:17.551960       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
	* I1113 23:48:34.978092       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	* I1113 23:48:34.979330       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_crio-20201113234030-7409_5db1e5cb-3c36-4154-b0d2-ac07ed7d6edc!
	* I1113 23:48:34.999094       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"79f37f59-638d-43b0-b081-00b1e98426a1", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' crio-20201113234030-7409_5db1e5cb-3c36-4154-b0d2-ac07ed7d6edc became leader
	* I1113 23:48:35.081477       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_crio-20201113234030-7409_5db1e5cb-3c36-4154-b0d2-ac07ed7d6edc!

                                                
                                                
-- /stdout --
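The repeated kubelet errors in the log dump above ("Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory") all describe one transient condition: the CRI-O socket was missing for a moment while the runtime restarted, and the kubelet's gRPC connection recovered right afterwards (the CONNECTING/READY lines). A minimal Go sketch of the same probe, assuming it is run on the node itself (for example after minikube ssh) and that CRI-O is using its default socket path:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const sock = "/var/run/crio/crio.sock" // default CRI-O socket path; adjust if relocated

        // Same failure mode as the kubelet's gRPC dial: if the socket file is
        // absent, this returns "connect: no such file or directory".
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Println("CRI socket not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("CRI socket is accepting connections")
    }

Since READY follows within a second in the log, these lines read as expected restart noise from the StartStop sequence rather than an ongoing runtime outage.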
** stderr ** 
	E1113 23:49:44.945218    3189 out.go:286] unable to execute * 2020-11-13 23:48:04.035036 W | etcdserver: request "header:<ID:13383983152324138109 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-crio-20201113234030-7409.164736181c8d0064\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-crio-20201113234030-7409.164736181c8d0064\" value_size:490 lease:4160611115469362065 >> failure:<>>" with result "size:16" took too long (211.056157ms) to execute
	: html/template:* 2020-11-13 23:48:04.035036 W | etcdserver: request "header:<ID:13383983152324138109 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-crio-20201113234030-7409.164736181c8d0064\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-crio-20201113234030-7409.164736181c8d0064\" value_size:490 lease:4160611115469362065 >> failure:<>>" with result "size:16" took too long (211.056157ms) to execute
	: "\"" in attribute name: " username:\\\"kube-apiserver-etcd-" - returning raw string.

                                                
                                                
** /stderr **
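The "out.go:286 ... unable to execute" message in the stderr block above is minikube tripping over the etcd warning rather than a new failure: the out package feeds each message through html/template, and the raw etcd line contains <...> spans whose embedded quotes the template escaper rejects ("\"" in attribute name), so minikube falls back to printing the raw string. A rough standalone reproduction of that escaper behaviour (not minikube's actual code path, and the sample string is shortened):

    package main

    import (
        "fmt"
        "html/template"
        "io"
        "os"
    )

    func main() {
        // Shortened stand-in for the etcdserver warning quoted above.
        msg := `request "header:<ID:1 username:\"kube-apiserver-etcd-client\" auth_revision:1 >" took too long`

        t, err := template.New("log").Parse(msg) // parsing succeeds: there are no {{actions}}
        if err != nil {
            fmt.Fprintln(os.Stderr, "parse:", err)
            return
        }

        // Contextual escaping runs on the first Execute; the quote inside what
        // looks like an HTML attribute name yields an error along the lines of:
        //   html/template:log: "\"" in attribute name: "username:\"kube-apiserver-..."
        if err := t.Execute(io.Discard, nil); err != nil {
            fmt.Fprintln(os.Stderr, "execute:", err)
        }
    }

The warning is cosmetic: the underlying etcd message (a request that took about 211ms) is still emitted verbatim.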
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p crio-20201113234030-7409 -n crio-20201113234030-7409
helpers_test.go:255: (dbg) Run:  kubectl --context crio-20201113234030-7409 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: non-running pods: 
helpers_test.go:263: ======> post-mortem[TestStartStop/group/crio/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:266: (dbg) Run:  kubectl --context crio-20201113234030-7409 describe pod 
helpers_test.go:266: (dbg) Non-zero exit: kubectl --context crio-20201113234030-7409 describe pod : exit status 1 (111.088561ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:268: kubectl --context crio-20201113234030-7409 describe pod : exit status 1
--- FAIL: TestStartStop/group/crio/serial/VerifyKubernetesImages (6.20s)
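One message in the kubelet log above that tends to draw the eye is the RBAC-looking error: secrets "kubernetes-dashboard-token-j6zmt" is forbidden ... no relationship found between node "crio-20201113234030-7409" and this object. That is the Node authorizer working as designed: a kubelet may read a secret only once a pod scheduled to that node references it, so the first list attempt right after the restart is denied, and later attempts succeed once the dashboard pods are bound to the node (the VerifyControllerAttachedVolume lines that follow). A hedged sketch that asks the API server the same authorization question through a SubjectAccessReview, assuming a kubeconfig in the default location and a client-go release of roughly this report's era (v0.19):

    package main

    import (
        "context"
        "fmt"

        authv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Ask: may this node's kubelet list that secret? The Node authorizer says
        // no until a pod bound to the node actually references the secret.
        sar := &authv1.SubjectAccessReview{
            Spec: authv1.SubjectAccessReviewSpec{
                User:   "system:node:crio-20201113234030-7409",
                Groups: []string{"system:nodes"},
                ResourceAttributes: &authv1.ResourceAttributes{
                    Namespace: "kubernetes-dashboard",
                    Verb:      "list",
                    Resource:  "secrets",
                    Name:      "kubernetes-dashboard-token-j6zmt",
                },
            },
        }
        resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("allowed=%v denied=%v reason=%q\n", resp.Status.Allowed, resp.Status.Denied, resp.Status.Reason)
    }

A denied answer with a node-authorizer reason is therefore not, by itself, evidence of an RBAC misconfiguration in this run.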

                                                
                                    

Test pass (162/172)

passed test	Duration (s)
TestDownloadOnly/crio/v1.13.0 15.76
TestDownloadOnly/crio/v1.19.4 7.42
TestDownloadOnly/crio/v1.20.0-beta.1 9.37
TestDownloadOnly/crio/DeleteAll 0.35
TestDownloadOnly/crio/DeleteAlwaysSucceeds 0.36
TestDownloadOnly/docker/v1.13.0 7.8
TestDownloadOnly/docker/v1.19.4 8.46
TestDownloadOnly/docker/v1.20.0-beta.1 5.16
TestDownloadOnly/docker/DeleteAll 0.36
TestDownloadOnly/docker/DeleteAlwaysSucceeds 0.36
TestDownloadOnly/containerd/v1.13.0 9.39
TestDownloadOnly/containerd/v1.19.4 18.14
TestDownloadOnly/containerd/v1.20.0-beta.1 10.65
TestDownloadOnly/containerd/DeleteAll 0.35
TestDownloadOnly/containerd/DeleteAlwaysSucceeds 0.35
TestOffline/group/docker 116.39
TestOffline/group/crio 131.48
TestOffline/group/containerd 103.44
TestAddons/parallel/Registry 20.7
TestAddons/parallel/Ingress 20.18
TestAddons/parallel/MetricsServer 7.6
TestAddons/parallel/HelmTiller 15.91
TestAddons/parallel/CSI 85.44
TestAddons/parallel/GCPAuth 47.88
TestCertOptions 102.4
TestDockerFlags 87.86
TestForceSystemdFlag 113.77
TestForceSystemdEnv 308.96
TestGvisorAddon 685.53
TestJSONOutput/start/parallel/DistinctCurrentSteps 0
TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
TestJSONOutputError 0.44
TestMultiNode/serial/FreshStart2Nodes 121.85
TestMultiNode/serial/AddNode 52.3
TestMultiNode/serial/StopNode 5.43
TestMultiNode/serial/StartAfterStop 29.83
TestMultiNode/serial/DeleteNode 2.39
TestMultiNode/serial/StopMultiNode 17.52
TestMultiNode/serial/RestartMultiNode 104.07
TestPreload 154.33
TestSkaffold 125.35
TestRunningBinaryUpgrade 268.52
TestStoppedBinaryUpgrade 440.69
TestKubernetesUpgrade 249.62
TestPause/serial/Start 116.54
TestPause/serial/SecondStartNoReconfiguration 18.24
TestPause/serial/Pause 1.8
TestPause/serial/VerifyStatus 0.45
TestPause/serial/Unpause 2.52
TestPause/serial/PauseAgain 2.2
TestPause/serial/DeletePaused 1.23
TestPause/serial/VerifyDeletedResources 0.6
TestNetworkPlugins/group/auto/Start 306.66
TestNetworkPlugins/group/kindnet/Start 275.37
TestNetworkPlugins/group/kindnet/ControllerPod 5.12
TestNetworkPlugins/group/auto/KubeletFlags 0.43
TestNetworkPlugins/group/auto/NetCatPod 16.12
TestNetworkPlugins/group/cilium/Start 769.96
TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
TestNetworkPlugins/group/kindnet/NetCatPod 16.5
TestNetworkPlugins/group/auto/DNS 0.7
TestNetworkPlugins/group/auto/Localhost 0.48
TestNetworkPlugins/group/auto/HairPin 5.52
TestNetworkPlugins/group/kindnet/DNS 0.65
TestNetworkPlugins/group/kindnet/Localhost 0.52
TestNetworkPlugins/group/kindnet/HairPin 0.75
TestNetworkPlugins/group/calico/Start 745.3
TestNetworkPlugins/group/custom-weave/Start 746.49
TestNetworkPlugins/group/false/Start 680.74
TestNetworkPlugins/group/calico/ControllerPod 5.07
TestNetworkPlugins/group/custom-weave/KubeletFlags 0.4
TestNetworkPlugins/group/cilium/ControllerPod 5.1
TestNetworkPlugins/group/custom-weave/NetCatPod 18.38
TestNetworkPlugins/group/false/KubeletFlags 0.37
TestNetworkPlugins/group/false/NetCatPod 17.54
TestNetworkPlugins/group/calico/KubeletFlags 0.35
TestNetworkPlugins/group/calico/NetCatPod 18.33
TestNetworkPlugins/group/cilium/KubeletFlags 0.46
TestNetworkPlugins/group/cilium/NetCatPod 19.57
TestNetworkPlugins/group/false/DNS 0.79
TestNetworkPlugins/group/false/Localhost 0.49
TestNetworkPlugins/group/false/HairPin 5.51
TestNetworkPlugins/group/enable-default-cni/Start 283.58
TestNetworkPlugins/group/calico/DNS 0.83
TestNetworkPlugins/group/calico/Localhost 0.51
TestNetworkPlugins/group/calico/HairPin 0.59
TestNetworkPlugins/group/cilium/DNS 0.82
TestNetworkPlugins/group/flannel/Start 278.43
TestNetworkPlugins/group/cilium/Localhost 0.45
TestNetworkPlugins/group/bridge/Start 276.41
TestNetworkPlugins/group/cilium/HairPin 0.49
TestNetworkPlugins/group/kubenet/Start 273.65
TestNetworkPlugins/group/kubenet/KubeletFlags 0.35
TestNetworkPlugins/group/kubenet/NetCatPod 18.63
TestNetworkPlugins/group/bridge/KubeletFlags 0.38
TestNetworkPlugins/group/bridge/NetCatPod 17.28
TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
TestNetworkPlugins/group/flannel/ControllerPod 5.1
TestNetworkPlugins/group/enable-default-cni/NetCatPod 16.17
TestNetworkPlugins/group/flannel/KubeletFlags 0.37
TestNetworkPlugins/group/flannel/NetCatPod 15.5
TestNetworkPlugins/group/enable-default-cni/DNS 0.64
TestNetworkPlugins/group/bridge/DNS 0.66
TestNetworkPlugins/group/enable-default-cni/Localhost 0.56
TestNetworkPlugins/group/kubenet/DNS 0.84
TestNetworkPlugins/group/bridge/Localhost 0.61
TestNetworkPlugins/group/enable-default-cni/HairPin 0.66
TestNetworkPlugins/group/bridge/HairPin 0.57
TestNetworkPlugins/group/kubenet/Localhost 0.49
TestNetworkPlugins/group/kubenet/HairPin 0.47
TestStartStop/group/old-k8s-version/serial/FirstStart 234.08
TestStartStop/group/crio/serial/FirstStart 319.87
TestStartStop/group/embed-certs/serial/FirstStart 235.79
TestNetworkPlugins/group/flannel/DNS 0.67
TestNetworkPlugins/group/flannel/Localhost 0.49
TestNetworkPlugins/group/flannel/HairPin 0.48
TestStartStop/group/newest-cni/serial/FirstStart 229.67
TestStartStop/group/old-k8s-version/serial/DeployApp 14.53
TestStartStop/group/newest-cni/serial/DeployApp 0
TestStartStop/group/newest-cni/serial/Stop 14.26
TestStartStop/group/embed-certs/serial/DeployApp 11.49
TestStartStop/group/embed-certs/serial/Stop 8.22
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.36
TestStartStop/group/old-k8s-version/serial/Stop 7.29
TestStartStop/group/newest-cni/serial/SecondStart 59.74
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
TestStartStop/group/old-k8s-version/serial/SecondStart 129.93
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.27
TestStartStop/group/embed-certs/serial/SecondStart 130.38
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.5
TestStartStop/group/newest-cni/serial/Pause 5.42
TestStartStop/group/containerd/serial/FirstStart 120.9
TestStartStop/group/crio/serial/DeployApp 14.47
TestStartStop/group/crio/serial/Stop 4.23
TestStartStop/group/crio/serial/EnableAddonAfterStop 0.25
TestStartStop/group/crio/serial/SecondStart 200.34
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.07
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.04
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.02
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.02
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.61
TestStartStop/group/old-k8s-version/serial/Pause 6.2
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.46
TestStartStop/group/embed-certs/serial/Pause 6.01
TestStartStop/group/containerd/serial/DeployApp 13.15
TestStartStop/group/containerd/serial/Stop 92.94
TestStartStop/group/crio/serial/UserAppExistsAfterStop 5.03
TestStartStop/group/containerd/serial/EnableAddonAfterStop 0.23
TestStartStop/group/containerd/serial/SecondStart 106.97
TestStartStop/group/crio/serial/AddonExistsAfterStop 5.02
TestStartStop/group/crio/serial/Pause 3.86
TestStartStop/group/containerd/serial/UserAppExistsAfterStop 164.02
TestStartStop/group/containerd/serial/AddonExistsAfterStop 5.02
TestStartStop/group/containerd/serial/VerifyKubernetesImages 0.29
TestDownloadOnly/crio/v1.13.0 (15.76s)

                                                
                                                
=== RUN   TestDownloadOnly/crio/v1.13.0
aaa_download_only_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p crio-20201113224455-7409 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=crio --driver=kvm2 
aaa_download_only_test.go:65: (dbg) Done: out/minikube-linux-amd64 start --download-only -p crio-20201113224455-7409 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=crio --driver=kvm2 : (15.754265214s)
--- PASS: TestDownloadOnly/crio/v1.13.0 (15.76s)

                                                
                                    
TestDownloadOnly/crio/v1.19.4 (7.42s)

                                                
                                                
=== RUN   TestDownloadOnly/crio/v1.19.4
aaa_download_only_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p crio-20201113224455-7409 --force --alsologtostderr --kubernetes-version=v1.19.4 --container-runtime=crio --driver=kvm2 
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p crio-20201113224455-7409 --force --alsologtostderr --kubernetes-version=v1.19.4 --container-runtime=crio --driver=kvm2 : (7.415917788s)
--- PASS: TestDownloadOnly/crio/v1.19.4 (7.42s)

                                                
                                    
TestDownloadOnly/crio/v1.20.0-beta.1 (9.37s)

                                                
                                                
=== RUN   TestDownloadOnly/crio/v1.20.0-beta.1
aaa_download_only_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p crio-20201113224455-7409 --force --alsologtostderr --kubernetes-version=v1.20.0-beta.1 --container-runtime=crio --driver=kvm2 
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p crio-20201113224455-7409 --force --alsologtostderr --kubernetes-version=v1.20.0-beta.1 --container-runtime=crio --driver=kvm2 : (9.370008181s)
--- PASS: TestDownloadOnly/crio/v1.20.0-beta.1 (9.37s)

                                                
                                    
TestDownloadOnly/crio/DeleteAll (0.35s)

                                                
                                                
=== RUN   TestDownloadOnly/crio/DeleteAll
aaa_download_only_test.go:133: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/crio/DeleteAll (0.35s)

                                                
                                    
TestDownloadOnly/crio/DeleteAlwaysSucceeds (0.36s)

                                                
                                                
=== RUN   TestDownloadOnly/crio/DeleteAlwaysSucceeds
aaa_download_only_test.go:145: (dbg) Run:  out/minikube-linux-amd64 delete -p crio-20201113224455-7409
--- PASS: TestDownloadOnly/crio/DeleteAlwaysSucceeds (0.36s)

                                                
                                    
TestDownloadOnly/docker/v1.13.0 (7.8s)

                                                
                                                
=== RUN   TestDownloadOnly/docker/v1.13.0
aaa_download_only_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p docker-20201113224528-7409 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:65: (dbg) Done: out/minikube-linux-amd64 start --download-only -p docker-20201113224528-7409 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=docker --driver=kvm2 : (7.798261892s)
--- PASS: TestDownloadOnly/docker/v1.13.0 (7.80s)

                                                
                                    
TestDownloadOnly/docker/v1.19.4 (8.46s)

                                                
                                                
=== RUN   TestDownloadOnly/docker/v1.19.4
aaa_download_only_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p docker-20201113224528-7409 --force --alsologtostderr --kubernetes-version=v1.19.4 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p docker-20201113224528-7409 --force --alsologtostderr --kubernetes-version=v1.19.4 --container-runtime=docker --driver=kvm2 : (8.461902893s)
--- PASS: TestDownloadOnly/docker/v1.19.4 (8.46s)

                                                
                                    
TestDownloadOnly/docker/v1.20.0-beta.1 (5.16s)

                                                
                                                
=== RUN   TestDownloadOnly/docker/v1.20.0-beta.1
aaa_download_only_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p docker-20201113224528-7409 --force --alsologtostderr --kubernetes-version=v1.20.0-beta.1 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p docker-20201113224528-7409 --force --alsologtostderr --kubernetes-version=v1.20.0-beta.1 --container-runtime=docker --driver=kvm2 : (5.161277622s)
--- PASS: TestDownloadOnly/docker/v1.20.0-beta.1 (5.16s)

                                                
                                    
TestDownloadOnly/docker/DeleteAll (0.36s)

                                                
                                                
=== RUN   TestDownloadOnly/docker/DeleteAll
aaa_download_only_test.go:133: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/docker/DeleteAll (0.36s)

                                                
                                    
TestDownloadOnly/docker/DeleteAlwaysSucceeds (0.36s)

                                                
                                                
=== RUN   TestDownloadOnly/docker/DeleteAlwaysSucceeds
aaa_download_only_test.go:145: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-20201113224528-7409
--- PASS: TestDownloadOnly/docker/DeleteAlwaysSucceeds (0.36s)

                                                
                                    
TestDownloadOnly/containerd/v1.13.0 (9.39s)

                                                
                                                
=== RUN   TestDownloadOnly/containerd/v1.13.0
aaa_download_only_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p containerd-20201113224551-7409 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=containerd --driver=kvm2 
aaa_download_only_test.go:65: (dbg) Done: out/minikube-linux-amd64 start --download-only -p containerd-20201113224551-7409 --force --alsologtostderr --kubernetes-version=v1.13.0 --container-runtime=containerd --driver=kvm2 : (9.392576844s)
--- PASS: TestDownloadOnly/containerd/v1.13.0 (9.39s)

                                                
                                    
TestDownloadOnly/containerd/v1.19.4 (18.14s)

                                                
                                                
=== RUN   TestDownloadOnly/containerd/v1.19.4
aaa_download_only_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p containerd-20201113224551-7409 --force --alsologtostderr --kubernetes-version=v1.19.4 --container-runtime=containerd --driver=kvm2 
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p containerd-20201113224551-7409 --force --alsologtostderr --kubernetes-version=v1.19.4 --container-runtime=containerd --driver=kvm2 : (18.138690227s)
--- PASS: TestDownloadOnly/containerd/v1.19.4 (18.14s)

                                                
                                    
TestDownloadOnly/containerd/v1.20.0-beta.1 (10.65s)

                                                
                                                
=== RUN   TestDownloadOnly/containerd/v1.20.0-beta.1
aaa_download_only_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p containerd-20201113224551-7409 --force --alsologtostderr --kubernetes-version=v1.20.0-beta.1 --container-runtime=containerd --driver=kvm2 
aaa_download_only_test.go:67: (dbg) Done: out/minikube-linux-amd64 start --download-only -p containerd-20201113224551-7409 --force --alsologtostderr --kubernetes-version=v1.20.0-beta.1 --container-runtime=containerd --driver=kvm2 : (10.6513004s)
--- PASS: TestDownloadOnly/containerd/v1.20.0-beta.1 (10.65s)

                                                
                                    
TestDownloadOnly/containerd/DeleteAll (0.35s)

                                                
                                                
=== RUN   TestDownloadOnly/containerd/DeleteAll
aaa_download_only_test.go:133: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/containerd/DeleteAll (0.35s)

                                                
                                    
TestDownloadOnly/containerd/DeleteAlwaysSucceeds (0.35s)

                                                
                                                
=== RUN   TestDownloadOnly/containerd/DeleteAlwaysSucceeds
aaa_download_only_test.go:145: (dbg) Run:  out/minikube-linux-amd64 delete -p containerd-20201113224551-7409
--- PASS: TestDownloadOnly/containerd/DeleteAlwaysSucceeds (0.35s)

                                                
                                    
TestOffline/group/docker (116.39s)

                                                
                                                
=== RUN   TestOffline/group/docker
=== PAUSE TestOffline/group/docker

                                                
                                                

                                                
                                                
=== CONT  TestOffline/group/docker

                                                
                                                
=== CONT  TestOffline/group/docker
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-20201113224630-7409 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime docker --driver=kvm2 

                                                
                                                
=== CONT  TestOffline/group/docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-20201113224630-7409 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime docker --driver=kvm2 : (1m55.190248719s)
helpers_test.go:171: Cleaning up "offline-docker-20201113224630-7409" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-20201113224630-7409
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-20201113224630-7409: (1.198379394s)
--- PASS: TestOffline/group/docker (116.39s)

                                                
                                    
TestOffline/group/crio (131.48s)

                                                
                                                
=== RUN   TestOffline/group/crio
=== PAUSE TestOffline/group/crio

                                                
                                                

                                                
                                                
=== CONT  TestOffline/group/crio
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-20201113224630-7409 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime crio --driver=kvm2 

                                                
                                                
=== CONT  TestOffline/group/crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-20201113224630-7409 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime crio --driver=kvm2 : (2m10.338926772s)
helpers_test.go:171: Cleaning up "offline-crio-20201113224630-7409" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-20201113224630-7409
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-20201113224630-7409: (1.144734985s)
--- PASS: TestOffline/group/crio (131.48s)

                                                
                                    
TestOffline/group/containerd (103.44s)

                                                
                                                
=== RUN   TestOffline/group/containerd
=== PAUSE TestOffline/group/containerd

                                                
                                                

                                                
                                                
=== CONT  TestOffline/group/containerd

                                                
                                                
=== CONT  TestOffline/group/containerd
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20201113224630-7409 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime containerd --driver=kvm2 

                                                
                                                
=== CONT  TestOffline/group/containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20201113224630-7409 --alsologtostderr -v=1 --memory=2000 --wait=true --container-runtime containerd --driver=kvm2 : (1m42.354022983s)
helpers_test.go:171: Cleaning up "offline-containerd-20201113224630-7409" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20201113224630-7409
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20201113224630-7409: (1.087125071s)
--- PASS: TestOffline/group/containerd (103.44s)

                                                
                                    
TestAddons/parallel/Registry (20.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:199: registry stabilized in 38.833927ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:201: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:333: "registry-n57s7" [eb4dc20d-1311-48b0-8727-14edbe8c0cc4] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:201: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.063126851s
addons_test.go:204: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:333: "registry-proxy-z55jb" [4d81b3b4-0635-4842-90ad-6a7edc43b8ed] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:204: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.023927401s
addons_test.go:209: (dbg) Run:  kubectl --context addons-20201113224841-7409 delete po -l run=registry-test --now
addons_test.go:214: (dbg) Run:  kubectl --context addons-20201113224841-7409 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:214: (dbg) Done: kubectl --context addons-20201113224841-7409 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.164425422s)
addons_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201113224841-7409 ip
2020/11/13 22:53:07 [DEBUG] GET http://192.168.39.17:5000
addons_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201113224841-7409 addons disable registry --alsologtostderr -v=1
addons_test.go:257: (dbg) Done: out/minikube-linux-amd64 -p addons-20201113224841-7409 addons disable registry --alsologtostderr -v=1: (1.128881087s)
--- PASS: TestAddons/parallel/Registry (20.70s)
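The registry addon check above boils down to two probes: an in-cluster wget against registry.kube-system.svc.cluster.local and a GET on port 5000 of the node IP (served by the registry-proxy pod listed earlier). A small sketch of the external half, assuming the node IP printed in the log and that the proxied registry answers the standard Registry HTTP API v2 endpoints:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Node IP taken from the log above; in general, use the output of
        // "minikube -p <profile> ip" instead.
        base := "http://192.168.39.17:5000"

        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get(base + "/v2/_catalog") // registry catalog endpoint
        if err != nil {
            fmt.Println("registry not reachable:", err)
            return
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body))
    }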

                                                
                                    
TestAddons/parallel/Ingress (20.18s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:126: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "kube-system" ...
helpers_test.go:333: "ingress-nginx-admission-create-n7gwc" [9a2e339d-a1b5-46eb-8bee-1b8cf3c4e5c4] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:126: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 10.451256ms
addons_test.go:131: (dbg) Run:  kubectl --context addons-20201113224841-7409 replace --force -f testdata/nginx-ing.yaml
addons_test.go:136: kubectl --context addons-20201113224841-7409 replace --force -f testdata/nginx-ing.yaml: unexpected stderr: Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
(may be temporary)
addons_test.go:145: (dbg) Run:  kubectl --context addons-20201113224841-7409 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:150: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:333: "nginx" [e79d575e-724e-4884-bdcd-88d9c8cccf20] Pending

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:333: "nginx" [e79d575e-724e-4884-bdcd-88d9c8cccf20] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:333: "nginx" [e79d575e-724e-4884-bdcd-88d9c8cccf20] Running
addons_test.go:150: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 16.095369793s
addons_test.go:160: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201113224841-7409 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201113224841-7409 addons disable ingress --alsologtostderr -v=1
addons_test.go:181: (dbg) Done: out/minikube-linux-amd64 -p addons-20201113224841-7409 addons disable ingress --alsologtostderr -v=1: (2.96258976s)
--- PASS: TestAddons/parallel/Ingress (20.18s)
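The ingress verification above is an ordinary HTTP request whose only trick is the Host header: the rule in testdata/nginx-ing.yaml evidently routes on nginx.example.com (that is the Host the test sends), so the test curls 127.0.0.1 inside the VM with an explicit Host header. The same request from the host machine, assuming the controller's port 80 is reachable on the node IP (otherwise run the equivalent curl via minikube ssh, as the test does):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Node IP is an assumption here; take it from "minikube -p <profile> ip".
        req, err := http.NewRequest(http.MethodGet, "http://192.168.39.17/", nil)
        if err != nil {
            panic(err)
        }
        // The ingress rule matches on the Host header, not on the request URL.
        req.Host = "nginx.example.com"

        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Do(req)
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("served via ingress:", resp.Status)
    }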

                                                
                                    
TestAddons/parallel/MetricsServer (7.6s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:275: metrics-server stabilized in 41.049771ms

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:277: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:333: "metrics-server-d9b576748-bmtgr" [cccb8b43-a273-4b46-97a6-371dda0f8666] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:277: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.05326911s
addons_test.go:283: (dbg) Run:  kubectl --context addons-20201113224841-7409 top pods -n kube-system

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:301: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201113224841-7409 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:301: (dbg) Done: out/minikube-linux-amd64 -p addons-20201113224841-7409 addons disable metrics-server --alsologtostderr -v=1: (2.363902881s)
--- PASS: TestAddons/parallel/MetricsServer (7.60s)

                                                
                                    
TestAddons/parallel/HelmTiller (15.91s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:319: tiller-deploy stabilized in 40.409452ms

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:321: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:333: "tiller-deploy-565984b594-szrrk" [fe5141d6-922f-4cd3-bdd9-c5601815d6f7] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:321: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.048709552s
addons_test.go:336: (dbg) Run:  kubectl --context addons-20201113224841-7409 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:336: (dbg) Done: kubectl --context addons-20201113224841-7409 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (9.893853392s)
addons_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201113224841-7409 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (15.91s)

                                                
                                    
TestAddons/parallel/CSI (85.44s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:434: csi-hostpath-driver pods stabilized in 16.669315ms
addons_test.go:437: (dbg) Run:  kubectl --context addons-20201113224841-7409 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:442: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:383: (dbg) Run:  kubectl --context addons-20201113224841-7409 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:383: (dbg) Run:  kubectl --context addons-20201113224841-7409 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:447: (dbg) Run:  kubectl --context addons-20201113224841-7409 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:452: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:333: "task-pv-pod" [99e22e61-cbbc-46f9-8578-2c4515e26c72] Pending

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:333: "task-pv-pod" [99e22e61-cbbc-46f9-8578-2c4515e26c72] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:333: "task-pv-pod" [99e22e61-cbbc-46f9-8578-2c4515e26c72] Running
addons_test.go:452: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 38.049192463s
addons_test.go:457: (dbg) Run:  kubectl --context addons-20201113224841-7409 create -f testdata/csi-hostpath-driver/snapshotclass.yaml
addons_test.go:463: (dbg) Run:  kubectl --context addons-20201113224841-7409 create -f testdata/csi-hostpath-driver/snapshot.yaml

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:468: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:408: (dbg) Run:  kubectl --context addons-20201113224841-7409 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:416: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:408: (dbg) Run:  kubectl --context addons-20201113224841-7409 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:408: (dbg) Run:  kubectl --context addons-20201113224841-7409 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:408: (dbg) Run:  kubectl --context addons-20201113224841-7409 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:408: (dbg) Run:  kubectl --context addons-20201113224841-7409 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:473: (dbg) Run:  kubectl --context addons-20201113224841-7409 delete pod task-pv-pod
addons_test.go:473: (dbg) Done: kubectl --context addons-20201113224841-7409 delete pod task-pv-pod: (10.836672333s)
addons_test.go:479: (dbg) Run:  kubectl --context addons-20201113224841-7409 delete pvc hpvc
addons_test.go:485: (dbg) Run:  kubectl --context addons-20201113224841-7409 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:490: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:383: (dbg) Run:  kubectl --context addons-20201113224841-7409 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:383: (dbg) Run:  kubectl --context addons-20201113224841-7409 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:495: (dbg) Run:  kubectl --context addons-20201113224841-7409 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:500: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:333: "task-pv-pod-restore" [bef040a3-c45f-4eaf-b46b-e9ea4f61e2d3] Pending
helpers_test.go:333: "task-pv-pod-restore" [bef040a3-c45f-4eaf-b46b-e9ea4f61e2d3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:333: "task-pv-pod-restore" [bef040a3-c45f-4eaf-b46b-e9ea4f61e2d3] Running
addons_test.go:500: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 18.037386628s
addons_test.go:505: (dbg) Run:  kubectl --context addons-20201113224841-7409 delete pod task-pv-pod-restore
addons_test.go:505: (dbg) Done: kubectl --context addons-20201113224841-7409 delete pod task-pv-pod-restore: (2.650600363s)
addons_test.go:509: (dbg) Run:  kubectl --context addons-20201113224841-7409 delete pvc hpvc-restore
addons_test.go:513: (dbg) Run:  kubectl --context addons-20201113224841-7409 delete volumesnapshot new-snapshot-demo
addons_test.go:517: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201113224841-7409 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:517: (dbg) Done: out/minikube-linux-amd64 -p addons-20201113224841-7409 addons disable csi-hostpath-driver --alsologtostderr -v=1: (5.910256581s)
addons_test.go:521: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201113224841-7409 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (85.44s)
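The CSI sequence above is a full snapshot/restore round trip: claim (hpvc) -> pod -> VolumeSnapshot (new-snapshot-demo) -> delete the original pod and claim -> a new claim (hpvc-restore) provisioned from the snapshot -> pod on the restored claim. The restore step hinges on the new claim's dataSource pointing at the snapshot. The testdata manifests themselves are not reproduced in this report, so the following is only an illustrative reconstruction of such a restore claim, built with the k8s.io/api types of roughly this report's era (v0.19) and a made-up 1Gi request:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        snapshotGroup := "snapshot.storage.k8s.io"

        pvc := corev1.PersistentVolumeClaim{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "PersistentVolumeClaim"},
            ObjectMeta: metav1.ObjectMeta{Name: "hpvc-restore"},
            Spec: corev1.PersistentVolumeClaimSpec{
                AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                // The restore: provision this claim from the snapshot taken above.
                DataSource: &corev1.TypedLocalObjectReference{
                    APIGroup: &snapshotGroup,
                    Kind:     "VolumeSnapshot",
                    Name:     "new-snapshot-demo",
                },
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        corev1.ResourceStorage: resource.MustParse("1Gi"), // placeholder size
                    },
                },
            },
        }

        out, _ := json.MarshalIndent(pvc, "", "  ")
        fmt.Println(string(out))
    }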

                                                
                                    
TestAddons/parallel/GCPAuth (47.88s)

                                                
                                                
=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:531: (dbg) Run:  kubectl --context addons-20201113224841-7409 create -f testdata/busybox.yaml

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:537: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:333: "busybox" [8f65987b-10ab-4734-abda-fa34a18c79f2] Pending
helpers_test.go:333: "busybox" [8f65987b-10ab-4734-abda-fa34a18c79f2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:333: "busybox" [8f65987b-10ab-4734-abda-fa34a18c79f2] Running

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:537: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 15.061482959s
addons_test.go:543: (dbg) Run:  kubectl --context addons-20201113224841-7409 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:543: (dbg) Done: kubectl --context addons-20201113224841-7409 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS": (1.177959217s)
addons_test.go:555: (dbg) Run:  kubectl --context addons-20201113224841-7409 exec busybox -- /bin/sh -c "cat /google-app-creds.json"

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:555: (dbg) Done: kubectl --context addons-20201113224841-7409 exec busybox -- /bin/sh -c "cat /google-app-creds.json": (1.188062464s)
addons_test.go:578: (dbg) Run:  kubectl --context addons-20201113224841-7409 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:589: (dbg) Run:  out/minikube-linux-amd64 -p addons-20201113224841-7409 addons disable gcp-auth --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:589: (dbg) Done: out/minikube-linux-amd64 -p addons-20201113224841-7409 addons disable gcp-auth --alsologtostderr -v=1: (29.230719995s)
--- PASS: TestAddons/parallel/GCPAuth (47.88s)

                                                
                                    
TestCertOptions (102.4s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:46: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20201113230948-7409 --memory=1900 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:46: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20201113230948-7409 --memory=1900 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m40.510577692s)
cert_options_test.go:57: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20201113230948-7409 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:72: (dbg) Run:  kubectl --context cert-options-20201113230948-7409 config view
helpers_test.go:171: Cleaning up "cert-options-20201113230948-7409" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20201113230948-7409
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20201113230948-7409: (1.320091378s)
--- PASS: TestCertOptions (102.40s)
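TestCertOptions passes extra --apiserver-ips/--apiserver-names plus a custom --apiserver-port and then reads the generated certificate back with openssl. The same check can be done with crypto/x509 once the certificate has been copied off the VM, for example with: out/minikube-linux-amd64 -p <profile> ssh "sudo cat /var/lib/minikube/certs/apiserver.crt" > apiserver.crt (the local file name below is an assumption):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("apiserver.crt") // local copy of the cert; path is an assumption
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found in apiserver.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }

        // These are the fields the openssl invocation above inspects: the extra
        // --apiserver-names should appear as DNS SANs and --apiserver-ips as IP SANs.
        fmt.Println("DNS SANs:", cert.DNSNames)
        fmt.Println("IP SANs: ", cert.IPAddresses)
    }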

                                                
                                    
TestDockerFlags (87.86s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-20201113231540-7409 --cache-images=false --memory=1800 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-20201113231540-7409 --cache-images=false --memory=1800 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m25.112978182s)
docker_test.go:46: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20201113231540-7409 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:57: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20201113231540-7409 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:171: Cleaning up "docker-flags-20201113231540-7409" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-20201113231540-7409

                                                
                                                
=== CONT  TestDockerFlags
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-20201113231540-7409: (1.811150652s)
--- PASS: TestDockerFlags (87.86s)

                                                
                                    
TestForceSystemdFlag (113.77s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20201113231545-7409 --memory=1800 --force-systemd --alsologtostderr -v=5 --driver=kvm2 

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20201113231545-7409 --memory=1800 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m50.513287871s)
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20201113231545-7409 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:171: Cleaning up "force-systemd-flag-20201113231545-7409" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20201113231545-7409
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20201113231545-7409: (2.781633376s)
--- PASS: TestForceSystemdFlag (113.77s)

                                                
                                    
TestForceSystemdEnv (308.96s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20201113231708-7409 --memory=1800 --alsologtostderr -v=5 --driver=kvm2 

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20201113231708-7409 --memory=1800 --alsologtostderr -v=5 --driver=kvm2 : (5m7.203310864s)
docker_test.go:113: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20201113231708-7409 ssh "docker info --format {{.CgroupDriver}}"

                                                
                                                
=== CONT  TestForceSystemdEnv
helpers_test.go:171: Cleaning up "force-systemd-env-20201113231708-7409" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20201113231708-7409

                                                
                                                
=== CONT  TestForceSystemdEnv
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20201113231708-7409: (1.206014251s)
--- PASS: TestForceSystemdEnv (308.96s)

                                                
                                    
x
+
TestGvisorAddon (685.53s)

                                                
                                                
=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

                                                
                                                

                                                
                                                
=== CONT  TestGvisorAddon
gvisor_addon_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-20201113231221-7409 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 

                                                
                                                
=== CONT  TestGvisorAddon
gvisor_addon_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-20201113231221-7409 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m36.130183639s)
gvisor_addon_test.go:57: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-20201113231221-7409 cache add gcr.io/k8s-minikube/gvisor-addon:2

                                                
                                                
=== CONT  TestGvisorAddon
gvisor_addon_test.go:57: (dbg) Done: out/minikube-linux-amd64 -p gvisor-20201113231221-7409 cache add gcr.io/k8s-minikube/gvisor-addon:2: (4m11.394900173s)
gvisor_addon_test.go:62: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-20201113231221-7409 addons enable gvisor
gvisor_addon_test.go:62: (dbg) Done: out/minikube-linux-amd64 -p gvisor-20201113231221-7409 addons enable gvisor: (6.35104144s)
gvisor_addon_test.go:67: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:333: "gvisor" [16547e4a-d197-4c8a-93d1-00d5af65361f] Running
gvisor_addon_test.go:67: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.068758702s
gvisor_addon_test.go:72: (dbg) Run:  kubectl --context gvisor-20201113231221-7409 replace --force -f testdata/nginx-untrusted.yaml
gvisor_addon_test.go:77: (dbg) Run:  kubectl --context gvisor-20201113231221-7409 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:82: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,untrusted=true" in namespace "default" ...
helpers_test.go:333: "nginx-untrusted" [35b91856-52e6-45b2-bae2-42ef3a3e1128] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:333: "nginx-untrusted" [35b91856-52e6-45b2-bae2-42ef3a3e1128] Running
gvisor_addon_test.go:82: (dbg) TestGvisorAddon: run=nginx,untrusted=true healthy within 28.025082019s
gvisor_addon_test.go:85: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:333: "nginx-gvisor" [b915338a-08d6-4904-ac62-9931871ba192] Running
gvisor_addon_test.go:85: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.015771082s
gvisor_addon_test.go:90: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-20201113231221-7409
gvisor_addon_test.go:90: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-20201113231221-7409: (1m34.083794931s)
gvisor_addon_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-20201113231221-7409 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 

                                                
                                                
=== CONT  TestGvisorAddon
gvisor_addon_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-20201113231221-7409 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (2m33.389236236s)
gvisor_addon_test.go:99: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:333: "gvisor" [16547e4a-d197-4c8a-93d1-00d5af65361f] Running
gvisor_addon_test.go:99: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.050329227s
gvisor_addon_test.go:102: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,untrusted=true" in namespace "default" ...
helpers_test.go:333: "nginx-untrusted" [35b91856-52e6-45b2-bae2-42ef3a3e1128] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:333: "nginx-untrusted" [35b91856-52e6-45b2-bae2-42ef3a3e1128] Running
gvisor_addon_test.go:102: (dbg) TestGvisorAddon: run=nginx,untrusted=true healthy within 32.029882892s
gvisor_addon_test.go:105: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:333: "nginx-gvisor" [b915338a-08d6-4904-ac62-9931871ba192] Running
gvisor_addon_test.go:105: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.034438098s
helpers_test.go:171: Cleaning up "gvisor-20201113231221-7409" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-20201113231221-7409
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-20201113231221-7409: (2.743490823s)
--- PASS: TestGvisorAddon (685.53s)
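
Note: the "waiting 4m0s for pods matching ..." lines above are a label-selector poll. A hedged sketch of that pattern is below; it assumes kubectl is on PATH and reuses the context and labels from this run, and it only checks the pod phase rather than full readiness.

// gvisor_pod_wait.go - hedged sketch of the wait-for-labeled-pods step.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as the test
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl",
			"--context", "gvisor-20201113231221-7409",
			"get", "pods", "-n", "default",
			"-l", "run=nginx,runtime=gvisor",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("nginx-gvisor is Running")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for run=nginx,runtime=gvisor")
}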

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutputError (0.44s)

                                                
                                                
=== RUN   TestJSONOutputError
json_output_test.go:134: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20201113225738-7409 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:134: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20201113225738-7409 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (119.389581ms)

                                                
                                                
-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20201113225738-7409] minikube v1.15.0 on Debian 9.13","name":"Initial Minikube Setup","totalsteps":"12"},"datacontenttype":"application/json","id":"8736bbb5-2d9f-41ba-8f7f-3387d22a8cc6","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-9698-3065-72ae9c24a6567fed6f66704b6e0b773ea4700fb6/kubeconfig"},"datacontenttype":"application/json","id":"7bb8f5d1-1d87-4df3-92b9-031da62be7ed","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"afac1610-563e-4fb7-a54c-8b839f3b7407","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-9698-3065-72ae9c24a6567fed6f66704b6e0b773ea4700fb6/.minikube"},"datacontenttype":"application/json","id":"d9964581-9d80-4982-a18f-b7f245d0936d","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=9698"},"datacontenttype":"application/json","id":"6f6aabbe-d17d-4ca0-a057-65cccd5a62ac","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"9987f6c1-4031-4d3c-9556-841bbc170ead","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

                                                
                                                
-- /stdout --
helpers_test.go:171: Cleaning up "json-output-error-20201113225738-7409" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20201113225738-7409
--- PASS: TestJSONOutputError (0.44s)
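
Note: the --output=json lines above are CloudEvents-style envelopes with a "type" and a string-valued "data" map. A hedged sketch of decoding the error event captured in this run is below; the struct is illustrative only and is not minikube's internal type.

// decode_event.go - hedged sketch of parsing one JSON event from the log above.
package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the fields visible in the log output.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	line := `{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"9987f6c1-4031-4d3c-9556-841bbc170ead","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}`

	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		fmt.Println("unmarshal:", err)
		return
	}
	if e.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("error %s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
	}
}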

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (121.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:68: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20201113225739-7409 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:68: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20201113225739-7409 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m1.163375335s)
multinode_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201113225739-7409 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (121.85s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (52.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20201113225739-7409 -v 3 --alsologtostderr
multinode_test.go:92: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20201113225739-7409 -v 3 --alsologtostderr: (51.39120348s)
multinode_test.go:98: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201113225739-7409 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.30s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (5.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201113225739-7409 node stop m03
multinode_test.go:114: (dbg) Done: out/minikube-linux-amd64 -p multinode-20201113225739-7409 node stop m03: (4.212162244s)
multinode_test.go:120: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201113225739-7409 status
multinode_test.go:120: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20201113225739-7409 status: exit status 7 (611.646149ms)

                                                
                                                
-- stdout --
	multinode-20201113225739-7409
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20201113225739-7409-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20201113225739-7409-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201113225739-7409 status --alsologtostderr
multinode_test.go:127: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20201113225739-7409 status --alsologtostderr: exit status 7 (601.957003ms)

                                                
                                                
-- stdout --
	multinode-20201113225739-7409
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20201113225739-7409-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20201113225739-7409-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1113 23:00:38.447611   12568 out.go:185] Setting OutFile to fd 1 ...
	I1113 23:00:38.448037   12568 out.go:232] TERM=,COLORTERM=, which probably does not support color
	I1113 23:00:38.448052   12568 out.go:198] Setting ErrFile to fd 2...
	I1113 23:00:38.448056   12568 out.go:232] TERM=,COLORTERM=, which probably does not support color
	I1113 23:00:38.448183   12568 root.go:279] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-9698-3065-72ae9c24a6567fed6f66704b6e0b773ea4700fb6/.minikube/bin
	I1113 23:00:38.448415   12568 out.go:192] Setting JSON to false
	I1113 23:00:38.448442   12568 mustload.go:66] Loading cluster: multinode-20201113225739-7409
	I1113 23:00:38.448855   12568 status.go:238] checking status of multinode-20201113225739-7409 ...
	I1113 23:00:38.449359   12568 main.go:119] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1113 23:00:38.449428   12568 main.go:119] libmachine: Launching plugin server for driver kvm2
	I1113 23:00:38.467359   12568 main.go:119] libmachine: Plugin server listening at address 127.0.0.1:44543
	I1113 23:00:38.468096   12568 main.go:119] libmachine: () Calling .GetVersion
	I1113 23:00:38.469011   12568 main.go:119] libmachine: Using API Version  1
	I1113 23:00:38.469042   12568 main.go:119] libmachine: () Calling .SetConfigRaw
	I1113 23:00:38.469587   12568 main.go:119] libmachine: () Calling .GetMachineName
	I1113 23:00:38.470072   12568 main.go:119] libmachine: (multinode-20201113225739-7409) Calling .GetState
	I1113 23:00:38.477939   12568 status.go:313] multinode-20201113225739-7409 host status = "Running" (err=<nil>)
	I1113 23:00:38.477972   12568 host.go:66] Checking if "multinode-20201113225739-7409" exists ...
	I1113 23:00:38.478551   12568 main.go:119] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1113 23:00:38.478688   12568 main.go:119] libmachine: Launching plugin server for driver kvm2
	I1113 23:00:38.498448   12568 main.go:119] libmachine: Plugin server listening at address 127.0.0.1:38775
	I1113 23:00:38.499321   12568 main.go:119] libmachine: () Calling .GetVersion
	I1113 23:00:38.500195   12568 main.go:119] libmachine: Using API Version  1
	I1113 23:00:38.500244   12568 main.go:119] libmachine: () Calling .SetConfigRaw
	I1113 23:00:38.500737   12568 main.go:119] libmachine: () Calling .GetMachineName
	I1113 23:00:38.501244   12568 main.go:119] libmachine: (multinode-20201113225739-7409) Calling .GetIP
	I1113 23:00:38.510474   12568 host.go:66] Checking if "multinode-20201113225739-7409" exists ...
	I1113 23:00:38.510957   12568 main.go:119] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1113 23:00:38.511056   12568 main.go:119] libmachine: Launching plugin server for driver kvm2
	I1113 23:00:38.529645   12568 main.go:119] libmachine: Plugin server listening at address 127.0.0.1:42979
	I1113 23:00:38.530529   12568 main.go:119] libmachine: () Calling .GetVersion
	I1113 23:00:38.531272   12568 main.go:119] libmachine: Using API Version  1
	I1113 23:00:38.531303   12568 main.go:119] libmachine: () Calling .SetConfigRaw
	I1113 23:00:38.531964   12568 main.go:119] libmachine: () Calling .GetMachineName
	I1113 23:00:38.532325   12568 main.go:119] libmachine: (multinode-20201113225739-7409) Calling .DriverName
	I1113 23:00:38.532601   12568 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1113 23:00:38.532725   12568 main.go:119] libmachine: (multinode-20201113225739-7409) Calling .GetSSHHostname
	I1113 23:00:38.543399   12568 main.go:119] libmachine: (multinode-20201113225739-7409) Calling .GetSSHPort
	I1113 23:00:38.543684   12568 main.go:119] libmachine: (multinode-20201113225739-7409) Calling .GetSSHKeyPath
	I1113 23:00:38.544122   12568 main.go:119] libmachine: (multinode-20201113225739-7409) Calling .GetSSHUsername
	I1113 23:00:38.544440   12568 sshutil.go:45] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-9698-3065-72ae9c24a6567fed6f66704b6e0b773ea4700fb6/.minikube/machines/multinode-20201113225739-7409/id_rsa Username:docker}
	I1113 23:00:38.667110   12568 ssh_runner.go:148] Run: systemctl --version
	I1113 23:00:38.677169   12568 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service kubelet
	I1113 23:00:38.694554   12568 kubeconfig.go:93] found "multinode-20201113225739-7409" server: "https://192.168.39.165:8443"
	I1113 23:00:38.694603   12568 api_server.go:146] Checking apiserver status ...
	I1113 23:00:38.694641   12568 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1113 23:00:38.710987   12568 ssh_runner.go:148] Run: sudo egrep ^[0-9]+:freezer: /proc/3355/cgroup
	I1113 23:00:38.721772   12568 api_server.go:162] apiserver freezer: "2:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12a9843998fddd80ee45285dfbd40d2b.slice/docker-79e4fc37bdbb3a0f65ce9e5177ac300b4b4fab40d0221448c091eea38d46806c.scope"
	I1113 23:00:38.721915   12568 ssh_runner.go:148] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12a9843998fddd80ee45285dfbd40d2b.slice/docker-79e4fc37bdbb3a0f65ce9e5177ac300b4b4fab40d0221448c091eea38d46806c.scope/freezer.state
	I1113 23:00:38.731796   12568 api_server.go:184] freezer state: "THAWED"
	I1113 23:00:38.731883   12568 api_server.go:221] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I1113 23:00:38.744174   12568 api_server.go:241] https://192.168.39.165:8443/healthz returned 200:
	ok
	I1113 23:00:38.744217   12568 status.go:388] multinode-20201113225739-7409 apiserver status = Running (err=<nil>)
	I1113 23:00:38.744233   12568 status.go:240] multinode-20201113225739-7409 status: &{Name:multinode-20201113225739-7409 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false}
	I1113 23:00:38.744290   12568 status.go:238] checking status of multinode-20201113225739-7409-m02 ...
	I1113 23:00:38.745122   12568 main.go:119] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1113 23:00:38.745475   12568 main.go:119] libmachine: Launching plugin server for driver kvm2
	I1113 23:00:38.761754   12568 main.go:119] libmachine: Plugin server listening at address 127.0.0.1:37687
	I1113 23:00:38.762590   12568 main.go:119] libmachine: () Calling .GetVersion
	I1113 23:00:38.763302   12568 main.go:119] libmachine: Using API Version  1
	I1113 23:00:38.763328   12568 main.go:119] libmachine: () Calling .SetConfigRaw
	I1113 23:00:38.763907   12568 main.go:119] libmachine: () Calling .GetMachineName
	I1113 23:00:38.764316   12568 main.go:119] libmachine: (multinode-20201113225739-7409-m02) Calling .GetState
	I1113 23:00:38.771743   12568 status.go:313] multinode-20201113225739-7409-m02 host status = "Running" (err=<nil>)
	I1113 23:00:38.771780   12568 host.go:66] Checking if "multinode-20201113225739-7409-m02" exists ...
	I1113 23:00:38.772201   12568 main.go:119] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1113 23:00:38.772265   12568 main.go:119] libmachine: Launching plugin server for driver kvm2
	I1113 23:00:38.788381   12568 main.go:119] libmachine: Plugin server listening at address 127.0.0.1:38103
	I1113 23:00:38.789123   12568 main.go:119] libmachine: () Calling .GetVersion
	I1113 23:00:38.789762   12568 main.go:119] libmachine: Using API Version  1
	I1113 23:00:38.789806   12568 main.go:119] libmachine: () Calling .SetConfigRaw
	I1113 23:00:38.790385   12568 main.go:119] libmachine: () Calling .GetMachineName
	I1113 23:00:38.790677   12568 main.go:119] libmachine: (multinode-20201113225739-7409-m02) Calling .GetIP
	I1113 23:00:38.799634   12568 host.go:66] Checking if "multinode-20201113225739-7409-m02" exists ...
	I1113 23:00:38.800063   12568 main.go:119] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1113 23:00:38.800121   12568 main.go:119] libmachine: Launching plugin server for driver kvm2
	I1113 23:00:38.817579   12568 main.go:119] libmachine: Plugin server listening at address 127.0.0.1:33915
	I1113 23:00:38.818228   12568 main.go:119] libmachine: () Calling .GetVersion
	I1113 23:00:38.818977   12568 main.go:119] libmachine: Using API Version  1
	I1113 23:00:38.819002   12568 main.go:119] libmachine: () Calling .SetConfigRaw
	I1113 23:00:38.819831   12568 main.go:119] libmachine: () Calling .GetMachineName
	I1113 23:00:38.820248   12568 main.go:119] libmachine: (multinode-20201113225739-7409-m02) Calling .DriverName
	I1113 23:00:38.820720   12568 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1113 23:00:38.820763   12568 main.go:119] libmachine: (multinode-20201113225739-7409-m02) Calling .GetSSHHostname
	I1113 23:00:38.832576   12568 main.go:119] libmachine: (multinode-20201113225739-7409-m02) Calling .GetSSHPort
	I1113 23:00:38.833024   12568 main.go:119] libmachine: (multinode-20201113225739-7409-m02) Calling .GetSSHKeyPath
	I1113 23:00:38.833729   12568 main.go:119] libmachine: (multinode-20201113225739-7409-m02) Calling .GetSSHUsername
	I1113 23:00:38.834093   12568 sshutil.go:45] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-9698-3065-72ae9c24a6567fed6f66704b6e0b773ea4700fb6/.minikube/machines/multinode-20201113225739-7409-m02/id_rsa Username:docker}
	I1113 23:00:38.930824   12568 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service kubelet
	I1113 23:00:38.944000   12568 status.go:240] multinode-20201113225739-7409-m02 status: &{Name:multinode-20201113225739-7409-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true}
	I1113 23:00:38.944046   12568 status.go:238] checking status of multinode-20201113225739-7409-m03 ...
	I1113 23:00:38.944426   12568 main.go:119] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1113 23:00:38.944476   12568 main.go:119] libmachine: Launching plugin server for driver kvm2
	I1113 23:00:38.961957   12568 main.go:119] libmachine: Plugin server listening at address 127.0.0.1:46287
	I1113 23:00:38.962701   12568 main.go:119] libmachine: () Calling .GetVersion
	I1113 23:00:38.963371   12568 main.go:119] libmachine: Using API Version  1
	I1113 23:00:38.963412   12568 main.go:119] libmachine: () Calling .SetConfigRaw
	I1113 23:00:38.963964   12568 main.go:119] libmachine: () Calling .GetMachineName
	I1113 23:00:38.964346   12568 main.go:119] libmachine: (multinode-20201113225739-7409-m03) Calling .GetState
	I1113 23:00:38.970569   12568 status.go:313] multinode-20201113225739-7409-m03 host status = "Stopped" (err=<nil>)
	I1113 23:00:38.970597   12568 status.go:326] host is not running, skipping remaining checks
	I1113 23:00:38.970607   12568 status.go:240] multinode-20201113225739-7409-m03 status: &{Name:multinode-20201113225739-7409-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (5.43s)
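
Note: `minikube status` exits non-zero when any node is down, which is why the run above records exit status 7 while the test still passes. A hedged sketch of interpreting that exit code is below; treating 7 as "a host/kubelet is stopped" is an assumption drawn from the output captured in this run.

// status_exitcode.go - hedged sketch of handling the non-zero status exit seen above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-20201113225739-7409", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &ee) && ee.ExitCode() == 7:
		// Assumption: exit code 7 indicates at least one stopped host/kubelet,
		// matching the stdout shown in this run.
		fmt.Println("status reported a stopped node (exit 7)")
	default:
		fmt.Println("status failed:", err)
	}
}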

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (29.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201113225739-7409 node start m03 --alsologtostderr
multinode_test.go:157: (dbg) Done: out/minikube-linux-amd64 -p multinode-20201113225739-7409 node start m03 --alsologtostderr: (28.717054065s)
multinode_test.go:164: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201113225739-7409 status
multinode_test.go:178: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.83s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:265: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201113225739-7409 node delete m03
multinode_test.go:265: (dbg) Done: out/minikube-linux-amd64 -p multinode-20201113225739-7409 node delete m03: (1.624688396s)
multinode_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201113225739-7409 status --alsologtostderr
multinode_test.go:295: (dbg) Run:  kubectl get nodes
multinode_test.go:303: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.39s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (17.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:186: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201113225739-7409 stop
multinode_test.go:186: (dbg) Done: out/minikube-linux-amd64 -p multinode-20201113225739-7409 stop: (17.246624354s)
multinode_test.go:192: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201113225739-7409 status
multinode_test.go:192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20201113225739-7409 status: exit status 7 (131.174391ms)

                                                
                                                
-- stdout --
	multinode-20201113225739-7409
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20201113225739-7409-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201113225739-7409 status --alsologtostderr
multinode_test.go:199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20201113225739-7409 status --alsologtostderr: exit status 7 (137.884249ms)

                                                
                                                
-- stdout --
	multinode-20201113225739-7409
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20201113225739-7409-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1113 23:01:28.645731   13060 out.go:185] Setting OutFile to fd 1 ...
	I1113 23:01:28.646042   13060 out.go:232] TERM=,COLORTERM=, which probably does not support color
	I1113 23:01:28.646057   13060 out.go:198] Setting ErrFile to fd 2...
	I1113 23:01:28.646062   13060 out.go:232] TERM=,COLORTERM=, which probably does not support color
	I1113 23:01:28.646181   13060 root.go:279] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-9698-3065-72ae9c24a6567fed6f66704b6e0b773ea4700fb6/.minikube/bin
	I1113 23:01:28.646415   13060 out.go:192] Setting JSON to false
	I1113 23:01:28.646438   13060 mustload.go:66] Loading cluster: multinode-20201113225739-7409
	I1113 23:01:28.646734   13060 status.go:238] checking status of multinode-20201113225739-7409 ...
	I1113 23:01:28.647282   13060 main.go:119] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1113 23:01:28.647368   13060 main.go:119] libmachine: Launching plugin server for driver kvm2
	I1113 23:01:28.665985   13060 main.go:119] libmachine: Plugin server listening at address 127.0.0.1:37539
	I1113 23:01:28.666934   13060 main.go:119] libmachine: () Calling .GetVersion
	I1113 23:01:28.667895   13060 main.go:119] libmachine: Using API Version  1
	I1113 23:01:28.667934   13060 main.go:119] libmachine: () Calling .SetConfigRaw
	I1113 23:01:28.668461   13060 main.go:119] libmachine: () Calling .GetMachineName
	I1113 23:01:28.668891   13060 main.go:119] libmachine: (multinode-20201113225739-7409) Calling .GetState
	I1113 23:01:28.676599   13060 status.go:313] multinode-20201113225739-7409 host status = "Stopped" (err=<nil>)
	I1113 23:01:28.676629   13060 status.go:326] host is not running, skipping remaining checks
	I1113 23:01:28.676636   13060 status.go:240] multinode-20201113225739-7409 status: &{Name:multinode-20201113225739-7409 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false}
	I1113 23:01:28.676659   13060 status.go:238] checking status of multinode-20201113225739-7409-m02 ...
	I1113 23:01:28.677257   13060 main.go:119] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1113 23:01:28.677313   13060 main.go:119] libmachine: Launching plugin server for driver kvm2
	I1113 23:01:28.694685   13060 main.go:119] libmachine: Plugin server listening at address 127.0.0.1:44803
	I1113 23:01:28.695592   13060 main.go:119] libmachine: () Calling .GetVersion
	I1113 23:01:28.696327   13060 main.go:119] libmachine: Using API Version  1
	I1113 23:01:28.696368   13060 main.go:119] libmachine: () Calling .SetConfigRaw
	I1113 23:01:28.696880   13060 main.go:119] libmachine: () Calling .GetMachineName
	I1113 23:01:28.697261   13060 main.go:119] libmachine: (multinode-20201113225739-7409-m02) Calling .GetState
	I1113 23:01:28.703496   13060 status.go:313] multinode-20201113225739-7409-m02 host status = "Stopped" (err=<nil>)
	I1113 23:01:28.703519   13060 status.go:326] host is not running, skipping remaining checks
	I1113 23:01:28.703526   13060 status.go:240] multinode-20201113225739-7409-m02 status: &{Name:multinode-20201113225739-7409-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (17.52s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (104.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20201113225739-7409 --wait=true -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:225: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20201113225739-7409 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m43.219510799s)
multinode_test.go:231: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20201113225739-7409 status --alsologtostderr
multinode_test.go:245: (dbg) Run:  kubectl get nodes
multinode_test.go:253: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (104.07s)
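
Note: the go-template invocation above prints one Ready-condition status per node. A hedged sketch that runs the same template (minus the extra shell quoting) and requires every value to be "True" is below.

// nodes_ready.go - hedged sketch of the Ready-condition check via kubectl's go-template output.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", tmpl).Output()
	if err != nil {
		fmt.Println("kubectl:", err)
		return
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if strings.TrimSpace(line) != "True" {
			fmt.Println("node not Ready:", line)
			return
		}
	}
	fmt.Println("all nodes Ready")
}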

                                                
                                    
x
+
TestPreload (154.33s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20201113230314-7409 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.17.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20201113230314-7409 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.17.0: (1m45.110343221s)
preload_test.go:50: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20201113230314-7409 -- docker pull busybox
preload_test.go:50: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20201113230314-7409 -- docker pull busybox: (2.30171482s)
preload_test.go:60: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20201113230314-7409 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --kubernetes-version=v1.17.3
preload_test.go:60: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20201113230314-7409 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --kubernetes-version=v1.17.3: (45.065744584s)
preload_test.go:64: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20201113230314-7409 -- docker images
helpers_test.go:171: Cleaning up "test-preload-20201113230314-7409" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20201113230314-7409
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20201113230314-7409: (1.371076646s)
--- PASS: TestPreload (154.33s)
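
Note: the final step above checks that an image pulled before the --kubernetes-version bump (busybox) is still present afterwards. A hedged sketch of that last assertion is below, reusing the ssh command and profile name from this run.

// preload_image_check.go - hedged sketch of the "busybox survived the restart" check.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "ssh",
		"-p", "test-preload-20201113230314-7409", "--", "docker", "images").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	// Assumption: the image pulled before the restart should still be listed.
	if strings.Contains(string(out), "busybox") {
		fmt.Println("busybox still present after the kubernetes-version upgrade")
	} else {
		fmt.Println("busybox missing from docker images output")
	}
}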

                                                
                                    
x
+
TestSkaffold (125.35s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:53: (dbg) Run:  /tmp/skaffold.exe052388170 version
skaffold_test.go:57: skaffold version: v1.16.0
skaffold_test.go:60: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-20201113230743-7409 --memory=2600 --driver=kvm2 
skaffold_test.go:60: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-20201113230743-7409 --memory=2600 --driver=kvm2 : (1m1.81532089s)
skaffold_test.go:73: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:97: (dbg) Run:  /tmp/skaffold.exe052388170 run --minikube-profile skaffold-20201113230743-7409 --kube-context skaffold-20201113230743-7409 --status-check=true --port-forward=false
skaffold_test.go:97: (dbg) Done: /tmp/skaffold.exe052388170 run --minikube-profile skaffold-20201113230743-7409 --kube-context skaffold-20201113230743-7409 --status-check=true --port-forward=false: (51.070595488s)
skaffold_test.go:103: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:333: "leeroy-app-66bbc789b-zthq2" [77996d1b-98a3-4827-b9c1-df97fba765fd] Running
skaffold_test.go:103: (dbg) TestSkaffold: app=leeroy-app healthy within 5.028500331s
skaffold_test.go:106: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:333: "leeroy-web-86f95c648-tb6z6" [e61be6ff-f04e-4458-9682-e2eb6d98af9b] Running
skaffold_test.go:106: (dbg) TestSkaffold: app=leeroy-web healthy within 5.01199803s
helpers_test.go:171: Cleaning up "skaffold-20201113230743-7409" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-20201113230743-7409
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-20201113230743-7409: (1.295954404s)
--- PASS: TestSkaffold (125.35s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (268.52s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:95: (dbg) Run:  /tmp/minikube-v1.6.2.904540411.exe start -p running-upgrade-20201113230948-7409 --memory=2200 --vm-driver=kvm2 

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:95: (dbg) Done: /tmp/minikube-v1.6.2.904540411.exe start -p running-upgrade-20201113230948-7409 --memory=2200 --vm-driver=kvm2 : (2m59.782876905s)
version_upgrade_test.go:105: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20201113230948-7409 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:105: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20201113230948-7409 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m26.955851396s)
helpers_test.go:171: Cleaning up "running-upgrade-20201113230948-7409" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20201113230948-7409
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20201113230948-7409: (1.320676045s)
--- PASS: TestRunningBinaryUpgrade (268.52s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade (440.69s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade
=== PAUSE TestStoppedBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:142: (dbg) Run:  /tmp/minikube-v1.0.0.159648012.exe start -p stopped-upgrade-20201113230948-7409 --memory=2200 --vm-driver=kvm2 
    > docker-machine-driver-kvm2.sha256: 65 B / 65 B  100.00%
    > docker-machine-driver-kvm2: 48.57 MiB / 48.57 MiB  100.00% 35.30 MiB p/s
    > docker-machine-driver-kvm2.sha256: 65 B / 65 B  100.00%
    > docker-machine-driver-kvm2: 48.57 MiB / 48.57 MiB  100.00% 18.53 MiB p/s
--- PASS: TestKVMDriverInstallOrUpdate (8.75s)

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:142: (dbg) Non-zero exit: /tmp/minikube-v1.0.0.159648012.exe start -p stopped-upgrade-20201113230948-7409 --memory=2200 --vm-driver=kvm2 : exit status 70 (2m46.074095857s)

                                                
                                                
-- stdout --
	o   minikube v1.0.0 on linux (amd64)
	$   Downloading Kubernetes v1.14.0 images in the background ...
	>   Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	@   Downloading Minikube ISO ...
	
 0 B / 142.88 MB    0.00%
 8.00 MB / 142.88 MB    5.60% 3s
 31.53 MB / 142.88 MB   22.07% 1s
 53.76 MB / 142.88 MB   37.62% 0s
 54.23 MB / 142.88 MB   37.96% 1s
 56.40 MB / 142.88 MB   39.47% 2s
 59.01 MB / 142.88 MB   41.30% 2s
 60.33 MB / 142.88 MB   42.22% 2s
 60.59 MB / 142.88 MB   42.41% 3s
 61.79 MB / 142.88 MB   43.25% 3s
 64.71 MB / 142.88 MB   45.29% 3s
 67.24 MB / 142.88 MB   47.06% 3s
 70.85 MB / 142.88 MB   49.59% 3s
 85.14 MB / 142.88 MB   59.59% 2s
 110.58 MB / 142.88 MB   77.39% 1s
 136.27 MB / 142.88 MB   95.38% 0s
 142.88 MB / 142.88 MB  100.00% 0s
	-   "stopped-upgrade-20201113230948-7409" IP address is 192.168.39.198
	-   Configuring Docker as the container runtime ...
	-   Version of container runtime is 18.06.2-ce
	:   Waiting for image downloads to complete ...
	-   Preparing Kubernetes environment ...
	@   Downloading kubeadm v1.14.0
	@   Downloading kubelet v1.14.0
	-   Pulling images required by Kubernetes v1.14.0 ...
	-   Launching Kubernetes v1.14.0 using kubeadm ... 
	:   Waiting for pods:

                                                
                                                
-- /stdout --
** stderr ** 
	2020/11/13 23:09:49 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
	2020/11/13 23:09:49 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
	2020/11/13 23:09:49 No matching credentials were found, falling back on anonymous
	2020/11/13 23:09:49 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
	2020/11/13 23:09:49 No matching credentials were found, falling back on anonymous
	2020/11/13 23:09:49 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
	2020/11/13 23:09:49 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
	2020/11/13 23:09:49 No matching credentials were found, falling back on anonymous
	2020/11/13 23:09:49 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
	2020/11/13 23:09:49 No matching credentials were found, falling back on anonymous
	2020/11/13 23:09:49 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
	2020/11/13 23:09:49 No matching credentials were found, falling back on anonymous
	2020/11/13 23:09:49 No matching credentials were found, falling back on anonymous
	2020/11/13 23:09:49 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
	2020/11/13 23:09:49 No matching credentials were found, falling back on anonymous
	2020/11/13 23:09:49 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
	2020/11/13 23:09:49 No matching credentials were found, falling back on anonymous
	2020/11/13 23:09:49 No matching credentials were found, falling back on anonymous
	2020/11/13 23:09:49 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
	2020/11/13 23:09:49 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
	2020/11/13 23:09:49 No matching credentials were found, falling back on anonymous
	2020/11/13 23:09:49 No matching credentials were found, falling back on anonymous
	2020/11/13 23:09:49 Unable to read "/home/jenkins/.docker/config.json": open /home/jenkins/.docker/config.json: no such file or directory
	2020/11/13 23:09:49 No matching credentials were found, falling back on anonymous
	
	!   Error starting cluster: wait: k8s client: Error creating kubeConfig: invalid configuration: no configuration has been provided
	
	*   Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
	-   https://github.com/kubernetes/minikube/issues/new

                                                
                                                
** /stderr **
version_upgrade_test.go:142: (dbg) Run:  /tmp/minikube-v1.0.0.159648012.exe start -p stopped-upgrade-20201113230948-7409 --memory=2200 --vm-driver=kvm2 

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:142: (dbg) Done: /tmp/minikube-v1.0.0.159648012.exe start -p stopped-upgrade-20201113230948-7409 --memory=2200 --vm-driver=kvm2 : (1m5.864090672s)
version_upgrade_test.go:151: (dbg) Run:  /tmp/minikube-v1.0.0.159648012.exe -p stopped-upgrade-20201113230948-7409 stop

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:151: (dbg) Done: /tmp/minikube-v1.0.0.159648012.exe -p stopped-upgrade-20201113230948-7409 stop: (1m43.945643352s)
version_upgrade_test.go:157: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20201113230948-7409 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:157: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20201113230948-7409 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m40.973263261s)
helpers_test.go:171: Cleaning up "stopped-upgrade-20201113230948-7409" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p stopped-upgrade-20201113230948-7409

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p stopped-upgrade-20201113230948-7409: (1.876383448s)
--- PASS: TestStoppedBinaryUpgrade (440.69s)

                                                
                                    
x
+
TestKubernetesUpgrade (249.62s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:172: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20201113231131-7409 --memory=2200 --kubernetes-version=v1.13.0 --alsologtostderr -v=1 --driver=kvm2 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:172: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20201113231131-7409 --memory=2200 --kubernetes-version=v1.13.0 --alsologtostderr -v=1 --driver=kvm2 : (1m46.047016746s)
version_upgrade_test.go:177: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20201113231131-7409
version_upgrade_test.go:177: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20201113231131-7409: (7.264807573s)
version_upgrade_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20201113231131-7409 status --format={{.Host}}
version_upgrade_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20201113231131-7409 status --format={{.Host}}: exit status 7 (98.100169ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:184: status error: exit status 7 (may be ok)
version_upgrade_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20201113231131-7409 --memory=2200 --kubernetes-version=v1.20.0-beta.1 --alsologtostderr -v=1 --driver=kvm2 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20201113231131-7409 --memory=2200 --kubernetes-version=v1.20.0-beta.1 --alsologtostderr -v=1 --driver=kvm2 : (1m31.880479444s)
version_upgrade_test.go:198: (dbg) Run:  kubectl --context kubernetes-upgrade-20201113231131-7409 version --output=json
version_upgrade_test.go:217: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20201113231131-7409 --memory=2200 --kubernetes-version=v1.13.0 --driver=kvm2 
version_upgrade_test.go:219: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20201113231131-7409 --memory=2200 --kubernetes-version=v1.13.0 --driver=kvm2 : exit status 106 (242.449045ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20201113231131-7409] minikube v1.15.0 on Debian 9.13
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-9698-3065-72ae9c24a6567fed6f66704b6e0b773ea4700fb6/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-9698-3065-72ae9c24a6567fed6f66704b6e0b773ea4700fb6/.minikube
	  - MINIKUBE_LOCATION=9698
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.20.0-beta.1 cluster to v1.13.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.13.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20201113231131-7409
	    minikube start -p kubernetes-upgrade-20201113231131-7409 --kubernetes-version=v1.13.0
	    
	    2) Create a second cluster with Kubernetes 1.13.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20201113231131-74092 --kubernetes-version=v1.13.0
	    
	    3) Use the existing cluster at version Kubernetes 1.20.0-beta.1, by running:
	    
	    minikube start -p kubernetes-upgrade-20201113231131-7409 --kubernetes-version=v1.20.0-beta.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:223: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20201113231131-7409 --memory=2200 --kubernetes-version=v1.20.0-beta.1 --alsologtostderr -v=1 --driver=kvm2 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:225: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20201113231131-7409 --memory=2200 --kubernetes-version=v1.20.0-beta.1 --alsologtostderr -v=1 --driver=kvm2 : (42.369194659s)
helpers_test.go:171: Cleaning up "kubernetes-upgrade-20201113231131-7409" profile ...
helpers_test.go:172: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20201113231131-7409

                                                
                                                
=== CONT  TestKubernetesUpgrade
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20201113231131-7409: (1.590184604s)
--- PASS: TestKubernetesUpgrade (249.62s)
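
For reference, the upgrade path exercised above can be replayed outside the test harness with the same flags the test passes; a minimal sketch, assuming a released minikube binary on PATH instead of out/minikube-linux-amd64, and using a throwaway profile name (k8s-upgrade-demo is illustrative):

    # start on the old release, stop, then upgrade the same profile in place
    minikube start -p k8s-upgrade-demo --memory=2200 --kubernetes-version=v1.13.0 --driver=kvm2
    minikube stop -p k8s-upgrade-demo
    minikube start -p k8s-upgrade-demo --memory=2200 --kubernetes-version=v1.20.0-beta.1 --driver=kvm2
    # a downgrade attempt is rejected (K8S_DOWNGRADE_UNSUPPORTED, exit status 106), as captured in the log above
    minikube start -p k8s-upgrade-demo --memory=2200 --kubernetes-version=v1.13.0 --driver=kvm2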

                                                
                                    
TestPause/serial/Start (116.54s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:75: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20201113230957-7409 --memory=1800 --install-addons=false --wait=all --driver=kvm2 

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:75: (dbg) Done: out/minikube-linux-amd64 start -p pause-20201113230957-7409 --memory=1800 --install-addons=false --wait=all --driver=kvm2 : (1m56.544475214s)
--- PASS: TestPause/serial/Start (116.54s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (18.24s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:87: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20201113230957-7409 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:87: (dbg) Done: out/minikube-linux-amd64 start -p pause-20201113230957-7409 --alsologtostderr -v=1 --driver=kvm2 : (18.215534712s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (18.24s)

                                                
                                    
TestPause/serial/Pause (1.8s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:104: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20201113230957-7409 --alsologtostderr -v=5
pause_test.go:104: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20201113230957-7409 --alsologtostderr -v=5: (1.795282946s)
--- PASS: TestPause/serial/Pause (1.80s)

                                                
                                    
TestPause/serial/VerifyStatus (0.45s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:75: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20201113230957-7409 --output=json --layout=cluster
status_test.go:75: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20201113230957-7409 --output=json --layout=cluster: exit status 2 (445.892448ms)

                                                
                                                
-- stdout --
	{"Name":"pause-20201113230957-7409","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.15.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":""}},"Nodes":[{"Name":"pause-20201113230957-7409","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.45s)
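
For reference, the paused state asserted here can be read straight from the JSON shown above; a minimal sketch, assuming jq is available (the 418 = "Paused" and 405 = "Stopped" codes are taken from that output):

    # top-level status name is "Paused" while the profile is paused
    minikube status -p pause-20201113230957-7409 --output=json --layout=cluster | jq -r '.StatusName'
    # per-component code, e.g. kubelet=405 (Stopped), apiserver=418 (Paused)
    minikube status -p pause-20201113230957-7409 --output=json --layout=cluster | jq '.Nodes[0].Components.kubelet.StatusCode'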

                                                
                                    
TestPause/serial/Unpause (2.52s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:114: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20201113230957-7409 --alsologtostderr -v=5
pause_test.go:114: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-20201113230957-7409 --alsologtostderr -v=5: (2.52251123s)
--- PASS: TestPause/serial/Unpause (2.52s)

                                                
                                    
TestPause/serial/PauseAgain (2.2s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:104: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20201113230957-7409 --alsologtostderr -v=5
pause_test.go:104: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20201113230957-7409 --alsologtostderr -v=5: (2.199207356s)
--- PASS: TestPause/serial/PauseAgain (2.20s)

                                                
                                    
TestPause/serial/DeletePaused (1.23s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20201113230957-7409 --alsologtostderr -v=5
pause_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20201113230957-7409 --alsologtostderr -v=5: (1.233868539s)
--- PASS: TestPause/serial/DeletePaused (1.23s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.6s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:134: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.60s)
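
The deleted-resources check ultimately amounts to the profile no longer appearing in the profile list; a minimal sketch, assuming jq is available and assuming the JSON groups profiles under .valid/.invalid (the schema itself is not shown in this log):

    # expect no line for pause-20201113230957-7409 once the profile has been deleted
    minikube profile list --output json | jq -r '(.valid // [])[].Name, (.invalid // [])[].Name'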

                                                
                                    
TestNetworkPlugins/group/auto/Start (306.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20201113231709-7409 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --driver=kvm2 

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p auto-20201113231709-7409 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --driver=kvm2 : (5m6.661265716s)
--- PASS: TestNetworkPlugins/group/auto/Start (306.66s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (275.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20201113231739-7409 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=kindnet --driver=kvm2 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20201113231739-7409 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=kindnet --driver=kvm2 : (4m35.372734433s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (275.37s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:88: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:333: "kindnet-8pqqr" [2cda7e1c-f684-403e-aceb-b605312d8010] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:88: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.104578841s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.12s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20201113231709-7409 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (16.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context auto-20201113231709-7409 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-8z9bv" [f6768524-a96a-43c4-8caf-9168cbea8b32] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-8z9bv" [f6768524-a96a-43c4-8caf-9168cbea8b32] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 15.028688773s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (16.12s)
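
The 15m poll for "app=netcat" above is the test helper's own wait loop; outside the harness the same readiness gate can be expressed directly with kubectl. A minimal sketch (context, label, and namespace are taken from the log; the timeout is illustrative):

    # deploy the netcat pod and block until it reports Ready
    kubectl --context auto-20201113231709-7409 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-20201113231709-7409 -n default wait --for=condition=ready pod -l app=netcat --timeout=15m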

                                                
                                    
TestNetworkPlugins/group/cilium/Start (769.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20201113232217-7409 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=cilium --driver=kvm2 

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20201113232217-7409 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=cilium --driver=kvm2 : (12m49.962416399s)
--- PASS: TestNetworkPlugins/group/cilium/Start (769.96s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20201113231739-7409 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (16.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context kindnet-20201113231739-7409 replace --force -f testdata/netcat-deployment.yaml
net_test.go:125: (dbg) Done: kubectl --context kindnet-20201113231739-7409 replace --force -f testdata/netcat-deployment.yaml: (1.001527431s)
net_test.go:139: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-lrfr2" [a87cd2e1-63ab-461b-8e8d-2c2e5da36b16] Pending
helpers_test.go:333: "netcat-66fbc655d5-lrfr2" [a87cd2e1-63ab-461b-8e8d-2c2e5da36b16] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-lrfr2" [a87cd2e1-63ab-461b-8e8d-2c2e5da36b16] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 15.035593764s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (16.50s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:156: (dbg) Run:  kubectl --context auto-20201113231709-7409 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.70s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:175: (dbg) Run:  kubectl --context auto-20201113231709-7409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.48s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (5.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:188: (dbg) Run:  kubectl --context auto-20201113231709-7409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/HairPin
net_test.go:188: (dbg) Non-zero exit: kubectl --context auto-20201113231709-7409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.519753845s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.52s)
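
Note that the hairpin probe fails here (exit status 1 after ~5.5s) yet the subtest still passes: with no CNI configured, a pod is not expected to reach itself through its own service, so the non-zero exit appears to be the accepted outcome for this variant (the false plugin shows the same pattern further down), judging by the PASS results. The probe itself is plain netcat against the service name from inside the pod:

    # hairpin check: connect to the "netcat" service (port 8080) from the pod that backs it
    kubectl --context auto-20201113231709-7409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"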

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:156: (dbg) Run:  kubectl --context kindnet-20201113231739-7409 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.65s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:175: (dbg) Run:  kubectl --context kindnet-20201113231739-7409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.52s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20201113231739-7409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.75s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (745.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20201113232240-7409 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=calico --driver=kvm2 

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p calico-20201113232240-7409 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=calico --driver=kvm2 : (12m25.303345076s)
--- PASS: TestNetworkPlugins/group/calico/Start (745.30s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (746.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20201113232240-7409 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=testdata/weavenet.yaml --driver=kvm2 

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20201113232240-7409 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=testdata/weavenet.yaml --driver=kvm2 : (12m26.487718097s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (746.49s)

                                                
                                    
TestNetworkPlugins/group/false/Start (680.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p false-20201113232346-7409 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=false --driver=kvm2 

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p false-20201113232346-7409 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=false --driver=kvm2 : (11m20.740281165s)
--- PASS: TestNetworkPlugins/group/false/Start (680.74s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:88: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:333: "calico-node-7zb2b" [ec76ce4e-d627-41c1-9790-9d93543aac72] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:88: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.067675109s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.07s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20201113232240-7409 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/cilium/ControllerPod (5.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:88: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/ControllerPod
helpers_test.go:333: "cilium-hpwzc" [ef96a6bd-e1d9-462f-9a55-013e75ba4a22] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:88: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.091819833s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.10s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/NetCatPod (18.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context custom-weave-20201113232240-7409 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-x9nql" [56985aa3-fdb1-4a3b-9ac2-65e691dc092b] Pending

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-x9nql" [56985aa3-fdb1-4a3b-9ac2-65e691dc092b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-x9nql" [56985aa3-fdb1-4a3b-9ac2-65e691dc092b] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 17.100450441s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (18.38s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-20201113232346-7409 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (17.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context false-20201113232346-7409 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-7vmhf" [e6cd1369-dfd8-44a2-84f1-c1f09000307d] Pending

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-7vmhf" [e6cd1369-dfd8-44a2-84f1-c1f09000307d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-7vmhf" [e6cd1369-dfd8-44a2-84f1-c1f09000307d] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 16.034240436s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (17.54s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20201113232240-7409 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (18.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context calico-20201113232240-7409 replace --force -f testdata/netcat-deployment.yaml
net_test.go:139: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-lzfk4" [05054fdc-0b96-4b82-aa6f-bc4acd4a65a0] Pending

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-lzfk4" [05054fdc-0b96-4b82-aa6f-bc4acd4a65a0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-lzfk4" [05054fdc-0b96-4b82-aa6f-bc4acd4a65a0] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 17.060247459s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (18.33s)

                                                
                                    
TestNetworkPlugins/group/cilium/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20201113232217-7409 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/cilium/NetCatPod (19.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context cilium-20201113232217-7409 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:125: (dbg) Done: kubectl --context cilium-20201113232217-7409 replace --force -f testdata/netcat-deployment.yaml: (1.206055247s)
net_test.go:139: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-xndh9" [1032458d-65c9-487b-b939-ed69419513ee] Pending
helpers_test.go:333: "netcat-66fbc655d5-xndh9" [1032458d-65c9-487b-b939-ed69419513ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-xndh9" [1032458d-65c9-487b-b939-ed69419513ee] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 18.049302904s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (19.57s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:156: (dbg) Run:  kubectl --context false-20201113232346-7409 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.79s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:175: (dbg) Run:  kubectl --context false-20201113232346-7409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.49s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (5.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:188: (dbg) Run:  kubectl --context false-20201113232346-7409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/HairPin
net_test.go:188: (dbg) Non-zero exit: kubectl --context false-20201113232346-7409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.513648765s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.51s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (283.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20201113233527-7409 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --enable-default-cni=true --driver=kvm2 

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20201113233527-7409 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --enable-default-cni=true --driver=kvm2 : (4m43.577392255s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (283.58s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:156: (dbg) Run:  kubectl --context calico-20201113232240-7409 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.83s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:175: (dbg) Run:  kubectl --context calico-20201113232240-7409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.51s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:188: (dbg) Run:  kubectl --context calico-20201113232240-7409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.59s)

                                                
                                    
TestNetworkPlugins/group/cilium/DNS (0.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:156: (dbg) Run:  kubectl --context cilium-20201113232217-7409 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.82s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (278.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-20201113233532-7409 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=flannel --driver=kvm2 

                                                
                                                
=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p flannel-20201113233532-7409 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=flannel --driver=kvm2 : (4m38.434754184s)
--- PASS: TestNetworkPlugins/group/flannel/Start (278.43s)

                                                
                                    
TestNetworkPlugins/group/cilium/Localhost (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:175: (dbg) Run:  kubectl --context cilium-20201113232217-7409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.45s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (276.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20201113233533-7409 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=bridge --driver=kvm2 

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20201113233533-7409 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --cni=bridge --driver=kvm2 : (4m36.414692084s)
--- PASS: TestNetworkPlugins/group/bridge/Start (276.41s)

                                                
                                    
TestNetworkPlugins/group/cilium/HairPin (0.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:188: (dbg) Run:  kubectl --context cilium-20201113232217-7409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.49s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (273.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-20201113233535-7409 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --network-plugin=kubenet --driver=kvm2 
net_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-20201113233535-7409 --memory=1800 --alsologtostderr --wait=true --wait-timeout=25m --network-plugin=kubenet --driver=kvm2 : (4m33.646654291s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (273.65s)
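
For comparison, the network-plugin group drives one start per CNI variant; the invocations below are abridged from the Run lines above (profile names here are illustrative, and --alsologtostderr/--wait-timeout are omitted). On this run cilium, calico, custom-weave, and false each took 11-13 minutes to come up, versus roughly 4-5 minutes for the remaining variants:

    minikube start -p auto-demo    --memory=1800 --wait=true --driver=kvm2                        # default CNI selection
    minikube start -p kindnet-demo --memory=1800 --wait=true --cni=kindnet --driver=kvm2
    minikube start -p cilium-demo  --memory=1800 --wait=true --cni=cilium --driver=kvm2
    minikube start -p calico-demo  --memory=1800 --wait=true --cni=calico --driver=kvm2
    minikube start -p weave-demo   --memory=1800 --wait=true --cni=testdata/weavenet.yaml --driver=kvm2
    minikube start -p false-demo   --memory=1800 --wait=true --cni=false --driver=kvm2
    minikube start -p defcni-demo  --memory=1800 --wait=true --enable-default-cni=true --driver=kvm2
    minikube start -p flannel-demo --memory=1800 --wait=true --cni=flannel --driver=kvm2
    minikube start -p bridge-demo  --memory=1800 --wait=true --cni=bridge --driver=kvm2
    minikube start -p kubenet-demo --memory=1800 --wait=true --network-plugin=kubenet --driver=kvm2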

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-20201113233535-7409 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (18.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context kubenet-20201113233535-7409 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:125: (dbg) Done: kubectl --context kubenet-20201113233535-7409 replace --force -f testdata/netcat-deployment.yaml: (1.053368556s)

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-x9dqv" [b2ca486d-da2a-4dfe-b109-082f350ce4f4] Pending

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-x9dqv" [b2ca486d-da2a-4dfe-b109-082f350ce4f4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-x9dqv" [b2ca486d-da2a-4dfe-b109-082f350ce4f4] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 17.051630915s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (18.63s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20201113233533-7409 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (17.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context bridge-20201113233533-7409 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-7snzd" [3736dce5-059d-4ace-9ff8-2652a1b57ae4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-7snzd" [3736dce5-059d-4ace-9ff8-2652a1b57ae4] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 16.029792271s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (17.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20201113233527-7409 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:88: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestNetworkPlugins/group/flannel/ControllerPod
helpers_test.go:333: "kube-flannel-ds-amd64-j29dw" [4a9f4039-fe86-4d4f-81dc-0b38458f73fc] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:88: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.099539488s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (16.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context enable-default-cni-20201113233527-7409 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-g6w4q" [161101db-1fca-42f1-a957-98bfdcf74abf] Pending
helpers_test.go:333: "netcat-66fbc655d5-g6w4q" [161101db-1fca-42f1-a957-98bfdcf74abf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-g6w4q" [161101db-1fca-42f1-a957-98bfdcf74abf] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 15.068370132s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (16.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:102: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-20201113233532-7409 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (15.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:125: (dbg) Run:  kubectl --context flannel-20201113233532-7409 replace --force -f testdata/netcat-deployment.yaml
net_test.go:125: (dbg) Done: kubectl --context flannel-20201113233532-7409 replace --force -f testdata/netcat-deployment.yaml: (1.216390266s)
net_test.go:139: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:333: "netcat-66fbc655d5-2fsfm" [54e21aaa-8358-4352-99b8-addb05e450ee] Pending
helpers_test.go:333: "netcat-66fbc655d5-2fsfm" [54e21aaa-8358-4352-99b8-addb05e450ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/flannel/NetCatPod
helpers_test.go:333: "netcat-66fbc655d5-2fsfm" [54e21aaa-8358-4352-99b8-addb05e450ee] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:139: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.015360984s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (15.50s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:156: (dbg) Run:  kubectl --context enable-default-cni-20201113233527-7409 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.64s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:156: (dbg) Run:  kubectl --context bridge-20201113233533-7409 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.66s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-20201113233527-7409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.56s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:156: (dbg) Run:  kubectl --context kubenet-20201113233535-7409 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.84s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:175: (dbg) Run:  kubectl --context bridge-20201113233533-7409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.61s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-20201113233527-7409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.66s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:188: (dbg) Run:  kubectl --context bridge-20201113233533-7409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.57s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:175: (dbg) Run:  kubectl --context kubenet-20201113233535-7409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.49s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:188: (dbg) Run:  kubectl --context kubenet-20201113233535-7409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.47s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (234.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20201113234030-7409 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --container-runtime=docker --driver=kvm2  --kubernetes-version=v1.13.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20201113234030-7409 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --container-runtime=docker --driver=kvm2  --kubernetes-version=v1.13.0: (3m54.07856554s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (234.08s)

                                                
                                    
TestStartStop/group/crio/serial/FirstStart (319.87s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Run:  out/minikube-linux-amd64 start -p crio-20201113234030-7409 --memory=2200 --alsologtostderr --wait=true --container-runtime=crio --disable-driver-mounts --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --driver=kvm2  --kubernetes-version=v1.15.7

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Done: out/minikube-linux-amd64 start -p crio-20201113234030-7409 --memory=2200 --alsologtostderr --wait=true --container-runtime=crio --disable-driver-mounts --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --driver=kvm2  --kubernetes-version=v1.15.7: (5m19.871963448s)
--- PASS: TestStartStop/group/crio/serial/FirstStart (319.87s)
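
This profile is started with --container-runtime=crio; a quick way to confirm the node actually came up on CRI-O rather than Docker is the runtime column kubectl reports. A minimal sketch (the kubeconfig context follows the profile name):

    # the CONTAINER-RUNTIME column should read cri-o://<version> for this profile
    kubectl --context crio-20201113234030-7409 get nodes -o wide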

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (235.79s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20201113234031-7409 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.19.4

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20201113234031-7409 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.19.4: (3m55.785748616s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (235.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:156: (dbg) Run:  kubectl --context flannel-20201113233532-7409 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:175: (dbg) Run:  kubectl --context flannel-20201113233532-7409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:188: (dbg) Run:  kubectl --context flannel-20201113233532-7409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.48s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (229.67s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20201113234035-7409 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --kubernetes-version=v1.20.0-beta.1

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20201113234035-7409 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --kubernetes-version=v1.20.0-beta.1: (3m49.672814143s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (229.67s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (14.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context old-k8s-version-20201113234030-7409 create -f testdata/busybox.yaml

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:163: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:333: "busybox" [29991efb-260a-11eb-961d-ac1f11e7ad42] Pending
helpers_test.go:333: "busybox" [29991efb-260a-11eb-961d-ac1f11e7ad42] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:333: "busybox" [29991efb-260a-11eb-961d-ac1f11e7ad42] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:163: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 13.053378563s
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context old-k8s-version-20201113234030-7409 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (14.53s)
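Note: the DeployApp step above applies testdata/busybox.yaml and then waits up to 8m0s for pods labelled integration-test=busybox to become healthy before running "ulimit -n" inside the pod. Outside the test harness the same wait can be approximated by hand; this is only an illustrative sketch (kubectl wait is an assumption here, not what helpers_test.go actually calls):

  kubectl --context old-k8s-version-20201113234030-7409 create -f testdata/busybox.yaml
  # wait for the busybox pod to report Ready (the test allows up to 8m0s)
  kubectl --context old-k8s-version-20201113234030-7409 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m0s
  kubectl --context old-k8s-version-20201113234030-7409 exec busybox -- /bin/sh -c "ulimit -n"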

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (14.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:169: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20201113234035-7409 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:169: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20201113234035-7409 --alsologtostderr -v=3: (14.260907853s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (14.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context embed-certs-20201113234031-7409 create -f testdata/busybox.yaml
start_stop_delete_test.go:163: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:333: "busybox" [b0bd26ec-40ae-4f1f-9d9c-8638c9cebe2b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:333: "busybox" [b0bd26ec-40ae-4f1f-9d9c-8638c9cebe2b] Running
start_stop_delete_test.go:163: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.107511408s
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context embed-certs-20201113234031-7409 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.49s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (8.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:169: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20201113234031-7409 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:169: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20201113234031-7409 --alsologtostderr -v=3: (8.22167872s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:179: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20201113234035-7409 -n newest-cni-20201113234035-7409
start_stop_delete_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20201113234035-7409 -n newest-cni-20201113234035-7409: exit status 7 (161.151234ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:179: status error: exit status 7 (may be ok)
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20201113234035-7409
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.36s)
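Note: in the EnableAddonAfterStop steps, "minikube status" exits with status 7 and prints "Stopped" while the host is down, which the test explicitly tolerates ("may be ok") before enabling the dashboard addon. A minimal manual equivalent, assuming the same local build and profile as above:

  # exit status 7 with output "Stopped" is expected while the VM is stopped
  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20201113234035-7409 || true
  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20201113234035-7409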

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (7.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:169: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20201113234030-7409 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:169: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20201113234030-7409 --alsologtostderr -v=3: (7.293610385s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (7.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (59.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20201113234035-7409 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --kubernetes-version=v1.20.0-beta.1

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20201113234035-7409 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --kubernetes-version=v1.20.0-beta.1: (59.276106336s)
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20201113234035-7409 -n newest-cni-20201113234035-7409
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (59.74s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:179: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20201113234030-7409 -n old-k8s-version-20201113234030-7409
start_stop_delete_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20201113234030-7409 -n old-k8s-version-20201113234030-7409: exit status 7 (113.399836ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:179: status error: exit status 7 (may be ok)
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20201113234030-7409
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (129.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20201113234030-7409 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --container-runtime=docker --driver=kvm2  --kubernetes-version=v1.13.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20201113234030-7409 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --container-runtime=docker --driver=kvm2  --kubernetes-version=v1.13.0: (2m9.386216323s)
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20201113234030-7409 -n old-k8s-version-20201113234030-7409
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (129.93s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:179: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20201113234031-7409 -n embed-certs-20201113234031-7409
start_stop_delete_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20201113234031-7409 -n embed-certs-20201113234031-7409: exit status 7 (117.862179ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:179: status error: exit status 7 (may be ok)
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20201113234031-7409
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (130.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20201113234031-7409 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.19.4

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20201113234031-7409 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.19.4: (2m9.947450502s)
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20201113234031-7409 -n embed-certs-20201113234031-7409
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (130.38s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:212: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:223: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:232: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20201113234035-7409 "sudo crictl images -o json"
start_stop_delete_test.go:232: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:232: Found non-minikube image: minikube-local-cache-test:functional-20201113225438-7409
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.50s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20201113234035-7409 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-20201113234035-7409 --alsologtostderr -v=1: (2.220914475s)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20201113234035-7409 -n newest-cni-20201113234035-7409
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20201113234035-7409 -n newest-cni-20201113234035-7409: exit status 2 (405.019452ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20201113234035-7409 -n newest-cni-20201113234035-7409
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20201113234035-7409 -n newest-cni-20201113234035-7409: exit status 2 (342.461714ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20201113234035-7409 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-20201113234035-7409 --alsologtostderr -v=1: (1.495626451s)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20201113234035-7409 -n newest-cni-20201113234035-7409
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20201113234035-7409 -n newest-cni-20201113234035-7409
--- PASS: TestStartStop/group/newest-cni/serial/Pause (5.42s)
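Note: the Pause step verifies that pausing leaves the apiserver reporting Paused and the kubelet reporting Stopped (both status probes exit with status 2, which the test accepts), and that unpausing restores normal status output. Reproducing the sequence by hand, as a sketch under the same assumptions about the local build and profile:

  out/minikube-linux-amd64 pause -p newest-cni-20201113234035-7409 --alsologtostderr -v=1
  # expect "Paused" and exit status 2
  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20201113234035-7409 -n newest-cni-20201113234035-7409
  # expect "Stopped" and exit status 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20201113234035-7409 -n newest-cni-20201113234035-7409
  out/minikube-linux-amd64 unpause -p newest-cni-20201113234035-7409 --alsologtostderr -v=1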

                                                
                                    
x
+
TestStartStop/group/containerd/serial/FirstStart (120.9s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Run:  out/minikube-linux-amd64 start -p containerd-20201113234547-7409 --memory=2200 --alsologtostderr --wait=true --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.19.4

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/FirstStart
start_stop_delete_test.go:154: (dbg) Done: out/minikube-linux-amd64 start -p containerd-20201113234547-7409 --memory=2200 --alsologtostderr --wait=true --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.19.4: (2m0.903986475s)
--- PASS: TestStartStop/group/containerd/serial/FirstStart (120.90s)

                                                
                                    
x
+
TestStartStop/group/crio/serial/DeployApp (14.47s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/DeployApp
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context crio-20201113234030-7409 create -f testdata/busybox.yaml
start_stop_delete_test.go:163: (dbg) TestStartStop/group/crio/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:333: "busybox" [5d14afba-9944-433e-818d-c5969fd23efc] Pending
helpers_test.go:333: "busybox" [5d14afba-9944-433e-818d-c5969fd23efc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:333: "busybox" [5d14afba-9944-433e-818d-c5969fd23efc] Running
start_stop_delete_test.go:163: (dbg) TestStartStop/group/crio/serial/DeployApp: integration-test=busybox healthy within 13.076369039s
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context crio-20201113234030-7409 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/crio/serial/DeployApp (14.47s)

                                                
                                    
x
+
TestStartStop/group/crio/serial/Stop (4.23s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/Stop
start_stop_delete_test.go:169: (dbg) Run:  out/minikube-linux-amd64 stop -p crio-20201113234030-7409 --alsologtostderr -v=3
start_stop_delete_test.go:169: (dbg) Done: out/minikube-linux-amd64 stop -p crio-20201113234030-7409 --alsologtostderr -v=3: (4.229251079s)
--- PASS: TestStartStop/group/crio/serial/Stop (4.23s)

                                                
                                    
x
+
TestStartStop/group/crio/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/EnableAddonAfterStop
start_stop_delete_test.go:179: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p crio-20201113234030-7409 -n crio-20201113234030-7409
start_stop_delete_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p crio-20201113234030-7409 -n crio-20201113234030-7409: exit status 7 (109.649334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:179: status error: exit status 7 (may be ok)
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p crio-20201113234030-7409
--- PASS: TestStartStop/group/crio/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
x
+
TestStartStop/group/crio/serial/SecondStart (200.34s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Run:  out/minikube-linux-amd64 start -p crio-20201113234030-7409 --memory=2200 --alsologtostderr --wait=true --container-runtime=crio --disable-driver-mounts --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --driver=kvm2  --kubernetes-version=v1.15.7

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Done: out/minikube-linux-amd64 start -p crio-20201113234030-7409 --memory=2200 --alsologtostderr --wait=true --container-runtime=crio --disable-driver-mounts --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --driver=kvm2  --kubernetes-version=v1.15.7: (3m19.995119101s)
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p crio-20201113234030-7409 -n crio-20201113234030-7409
--- PASS: TestStartStop/group/crio/serial/SecondStart (200.34s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-66766c77dc-2ptbm" [642aaf34-260a-11eb-98a8-ac1f11e7ad42] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.060566187s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-584f46694c-vgqrq" [bcb0a7ed-8c0c-4066-919d-e44690b9329d] Running

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.035958174s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.04s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:224: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-66766c77dc-2ptbm" [642aaf34-260a-11eb-98a8-ac1f11e7ad42] Running

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:224: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016684872s
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.02s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:224: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-584f46694c-vgqrq" [bcb0a7ed-8c0c-4066-919d-e44690b9329d] Running

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:224: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018072289s
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:232: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20201113234030-7409 "sudo crictl images -o json"
start_stop_delete_test.go:232: Found non-minikube image: busybox:1.28.4-glibc
start_stop_delete_test.go:232: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:232: Found non-minikube image: minikube-local-cache-test:functional-20201113225438-7409
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.61s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20201113234030-7409 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:238: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-20201113234030-7409 --alsologtostderr -v=1: (2.356731693s)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20201113234030-7409 -n old-k8s-version-20201113234030-7409
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20201113234030-7409 -n old-k8s-version-20201113234030-7409: exit status 2 (446.874389ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20201113234030-7409 -n old-k8s-version-20201113234030-7409

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20201113234030-7409 -n old-k8s-version-20201113234030-7409: exit status 2 (444.427841ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20201113234030-7409 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:238: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-20201113234030-7409 --alsologtostderr -v=1: (1.68874622s)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20201113234030-7409 -n old-k8s-version-20201113234030-7409

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20201113234030-7409 -n old-k8s-version-20201113234030-7409
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (6.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:232: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20201113234031-7409 "sudo crictl images -o json"
start_stop_delete_test.go:232: Found non-minikube image: busybox:1.28.4-glibc
start_stop_delete_test.go:232: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:232: Found non-minikube image: minikube-local-cache-test:functional-20201113225438-7409
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.46s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20201113234031-7409 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:238: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-20201113234031-7409 --alsologtostderr -v=1: (2.148963598s)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20201113234031-7409 -n embed-certs-20201113234031-7409
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20201113234031-7409 -n embed-certs-20201113234031-7409: exit status 2 (418.565087ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20201113234031-7409 -n embed-certs-20201113234031-7409

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20201113234031-7409 -n embed-certs-20201113234031-7409: exit status 2 (384.457499ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20201113234031-7409 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:238: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-20201113234031-7409 --alsologtostderr -v=1: (1.838242714s)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20201113234031-7409 -n embed-certs-20201113234031-7409

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20201113234031-7409 -n embed-certs-20201113234031-7409
--- PASS: TestStartStop/group/embed-certs/serial/Pause (6.01s)

                                                
                                    
x
+
TestStartStop/group/containerd/serial/DeployApp (13.15s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/DeployApp
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context containerd-20201113234547-7409 create -f testdata/busybox.yaml
start_stop_delete_test.go:163: (dbg) TestStartStop/group/containerd/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:333: "busybox" [aa2dbd37-739d-4e60-a7b8-5f1e1d650a65] Pending

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/DeployApp
helpers_test.go:333: "busybox" [aa2dbd37-739d-4e60-a7b8-5f1e1d650a65] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/DeployApp
helpers_test.go:333: "busybox" [aa2dbd37-739d-4e60-a7b8-5f1e1d650a65] Running

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/DeployApp
start_stop_delete_test.go:163: (dbg) TestStartStop/group/containerd/serial/DeployApp: integration-test=busybox healthy within 12.058825202s
start_stop_delete_test.go:163: (dbg) Run:  kubectl --context containerd-20201113234547-7409 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/containerd/serial/DeployApp (13.15s)

                                                
                                    
x
+
TestStartStop/group/containerd/serial/Stop (92.94s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/Stop
start_stop_delete_test.go:169: (dbg) Run:  out/minikube-linux-amd64 stop -p containerd-20201113234547-7409 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/Stop
start_stop_delete_test.go:169: (dbg) Done: out/minikube-linux-amd64 stop -p containerd-20201113234547-7409 --alsologtostderr -v=3: (1m32.939134219s)
--- PASS: TestStartStop/group/containerd/serial/Stop (92.94s)

                                                
                                    
x
+
TestStartStop/group/crio/serial/UserAppExistsAfterStop (5.03s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: (dbg) TestStartStop/group/crio/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-5ddb79bb9f-ndvzw" [fb40b588-4533-4fa9-a315-47b33b420ed6] Running

                                                
                                                
=== CONT  TestStartStop/group/crio/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: (dbg) TestStartStop/group/crio/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.024486054s
--- PASS: TestStartStop/group/crio/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
x
+
TestStartStop/group/containerd/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/EnableAddonAfterStop
start_stop_delete_test.go:179: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p containerd-20201113234547-7409 -n containerd-20201113234547-7409
start_stop_delete_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p containerd-20201113234547-7409 -n containerd-20201113234547-7409: exit status 7 (101.852376ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:179: status error: exit status 7 (may be ok)
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p containerd-20201113234547-7409
--- PASS: TestStartStop/group/containerd/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/containerd/serial/SecondStart (106.97s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Run:  out/minikube-linux-amd64 start -p containerd-20201113234547-7409 --memory=2200 --alsologtostderr --wait=true --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.19.4

                                                
                                                
=== CONT  TestStartStop/group/containerd/serial/SecondStart
start_stop_delete_test.go:195: (dbg) Done: out/minikube-linux-amd64 start -p containerd-20201113234547-7409 --memory=2200 --alsologtostderr --wait=true --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.19.4: (1m46.616588664s)
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p containerd-20201113234547-7409 -n containerd-20201113234547-7409
--- PASS: TestStartStop/group/containerd/serial/SecondStart (106.97s)

                                                
                                    
x
+
TestStartStop/group/crio/serial/AddonExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/AddonExistsAfterStop
start_stop_delete_test.go:224: (dbg) TestStartStop/group/crio/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-5ddb79bb9f-ndvzw" [fb40b588-4533-4fa9-a315-47b33b420ed6] Running
start_stop_delete_test.go:224: (dbg) TestStartStop/group/crio/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015490488s
--- PASS: TestStartStop/group/crio/serial/AddonExistsAfterStop (5.02s)

                                                
                                    
x
+
TestStartStop/group/crio/serial/Pause (3.86s)

                                                
                                                
=== RUN   TestStartStop/group/crio/serial/Pause
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 pause -p crio-20201113234030-7409 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Done: out/minikube-linux-amd64 pause -p crio-20201113234030-7409 --alsologtostderr -v=1: (1.210872867s)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p crio-20201113234030-7409 -n crio-20201113234030-7409
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p crio-20201113234030-7409 -n crio-20201113234030-7409: exit status 2 (340.023471ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p crio-20201113234030-7409 -n crio-20201113234030-7409
start_stop_delete_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p crio-20201113234030-7409 -n crio-20201113234030-7409: exit status 2 (343.077683ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:238: status error: exit status 2 (may be ok)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 unpause -p crio-20201113234030-7409 --alsologtostderr -v=1
start_stop_delete_test.go:238: (dbg) Done: out/minikube-linux-amd64 unpause -p crio-20201113234030-7409 --alsologtostderr -v=1: (1.068447591s)
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p crio-20201113234030-7409 -n crio-20201113234030-7409
start_stop_delete_test.go:238: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p crio-20201113234030-7409 -n crio-20201113234030-7409
--- PASS: TestStartStop/group/crio/serial/Pause (3.86s)

                                                
                                    
x
+
TestStartStop/group/containerd/serial/UserAppExistsAfterStop (164.02s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/UserAppExistsAfterStop
start_stop_delete_test.go:213: (dbg) TestStartStop/group/containerd/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-584f46694c-h658h" [9f95122d-b152-42ee-9702-c430fad33a90] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:333: "kubernetes-dashboard-584f46694c-h658h" [9f95122d-b152-42ee-9702-c430fad33a90] Running
start_stop_delete_test.go:213: (dbg) TestStartStop/group/containerd/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 2m44.019250188s
--- PASS: TestStartStop/group/containerd/serial/UserAppExistsAfterStop (164.02s)

                                                
                                    
x
+
TestStartStop/group/containerd/serial/AddonExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/AddonExistsAfterStop
start_stop_delete_test.go:224: (dbg) TestStartStop/group/containerd/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:333: "kubernetes-dashboard-584f46694c-h658h" [9f95122d-b152-42ee-9702-c430fad33a90] Running
start_stop_delete_test.go:224: (dbg) TestStartStop/group/containerd/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015016219s
--- PASS: TestStartStop/group/containerd/serial/AddonExistsAfterStop (5.02s)

                                                
                                    
x
+
TestStartStop/group/containerd/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/containerd/serial/VerifyKubernetesImages
start_stop_delete_test.go:232: (dbg) Run:  out/minikube-linux-amd64 ssh -p containerd-20201113234547-7409 "sudo crictl images -o json"
start_stop_delete_test.go:232: Found non-minikube image: kindest/kindnetd:0.5.4
start_stop_delete_test.go:232: Found non-minikube image: library/busybox:1.28.4-glibc
start_stop_delete_test.go:232: Found non-minikube image: library/minikube-local-cache-test:functional-20201113225438-7409
start_stop_delete_test.go:232: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/containerd/serial/VerifyKubernetesImages (0.29s)
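Note: the VerifyKubernetesImages steps list the images present on the node via crictl and report anything that is not a stock minikube image. The listing can be reproduced manually; the jq filter below is an illustrative assumption (both the tool's availability and crictl's JSON layout), not part of the test:

  # list all image tags known to the node's container runtime
  out/minikube-linux-amd64 ssh -p containerd-20201113234547-7409 "sudo crictl images -o json" | jq -r '.images[].repoTags[]?' | sort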

                                                
                                    

Test skip (7/172)

skipped test Duration
TestDownloadOnlyKic 0
TestAddons/parallel/Olm 0
TestHyperKitDriverInstallOrUpdate 0
TestHyperkitDriverSkipUpgrade 0
TestChangeNoneUser 0
TestInsufficientStorage 0
TestMissingContainerUpgrade 0
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:156: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:360: Skipping olm test till this timeout issue is solved https://github.com/operator-framework/operator-lifecycle-manager/issues/1534#issuecomment-632342257
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:110: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:182: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:37: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:234: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    