=== RUN TestErrorSpam
=== PAUSE TestErrorSpam
=== CONT TestErrorSpam
error_spam_test.go:62: (dbg) Run: out/minikube-linux-amd64 start -p nospam-20201113231417-7409 -n=1 --memory=2250 --wait=false --driver=kvm2
=== CONT TestErrorSpam
error_spam_test.go:62: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20201113231417-7409 -n=1 --memory=2250 --wait=false --driver=kvm2 : (1m23.298104544s)
error_spam_test.go:77: unexpected stderr: "! Unable to update kvm2 driver: download: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.15.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.15.0/docker-machine-driver-kvm2.sha256 Dst:/home/jenkins/minikube-integration/linux-amd64-kvm2-9698-3065-72ae9c24a6567fed6f66704b6e0b773ea4700fb6/.minikube/bin/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x2ac93c8 0x2ac93c8 0x2ac93c8 0x2ac93c8 0x2ac93c8 0x2ac93c8 0x2ac93c8] Decompressors:map[bz2:0x2ac93c8 gz:0x2ac93c8 tar.bz2:0x2ac93c8 tar.gz:0x2ac93c8 tar.xz:0x2ac93c8 tbz2:0x2ac93c8 tgz:0x2ac93c8 txz:0x2ac93c8 xz:0x2ac93c8 zip:0x2ac93c8] Getters:map[file:0xc0004995b0 http:0xc0004647e0 https:0xc000464800] Dir:false ProgressListener:0x2a8d3c0 Options:[0xcd96e0]}: invalid checksum: Error downloading checksum file: bad response code: 404"
error_spam_test.go:91: minikube stdout:
* [nospam-20201113231417-7409] minikube v1.15.0 on Debian 9.13
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-9698-3065-72ae9c24a6567fed6f66704b6e0b773ea4700fb6/kubeconfig
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-9698-3065-72ae9c24a6567fed6f66704b6e0b773ea4700fb6/.minikube
- MINIKUBE_LOCATION=9698
* Using the kvm2 driver based on user configuration
* Downloading driver docker-machine-driver-kvm2:
* Starting control plane node nospam-20201113231417-7409 in cluster nospam-20201113231417-7409
* Creating kvm2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-20201113231417-7409" cluster and "default" namespace by default
error_spam_test.go:92: minikube stderr:
! Unable to update kvm2 driver: download: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.15.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.15.0/docker-machine-driver-kvm2.sha256 Dst:/home/jenkins/minikube-integration/linux-amd64-kvm2-9698-3065-72ae9c24a6567fed6f66704b6e0b773ea4700fb6/.minikube/bin/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x2ac93c8 0x2ac93c8 0x2ac93c8 0x2ac93c8 0x2ac93c8 0x2ac93c8 0x2ac93c8] Decompressors:map[bz2:0x2ac93c8 gz:0x2ac93c8 tar.bz2:0x2ac93c8 tar.gz:0x2ac93c8 tar.xz:0x2ac93c8 tbz2:0x2ac93c8 tgz:0x2ac93c8 txz:0x2ac93c8 xz:0x2ac93c8 zip:0x2ac93c8] Getters:map[file:0xc0004995b0 http:0xc0004647e0 https:0xc000464800] Dir:false ProgressListener:0x2a8d3c0 Options:[0xcd96e0]}: invalid checksum: Error downloading checksum file: bad response code: 404
error_spam_test.go:94: *** TestErrorSpam FAILED at 2020-11-13 23:15:40.467310404 +0000 UTC m=+1845.400695659
helpers_test.go:216: -----------------------post-mortem--------------------------------
helpers_test.go:233: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p nospam-20201113231417-7409 -n nospam-20201113231417-7409
=== CONT TestErrorSpam
helpers_test.go:238: <<< TestErrorSpam FAILED: start of post-mortem logs <<<
helpers_test.go:239: ======> post-mortem[TestErrorSpam]: minikube logs <======
helpers_test.go:241: (dbg) Run: out/minikube-linux-amd64 -p nospam-20201113231417-7409 logs -n 25
helpers_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p nospam-20201113231417-7409 logs -n 25: (2.853965439s)
helpers_test.go:246: TestErrorSpam logs:
-- stdout --
* ==> Docker <==
* -- Logs begin at Fri 2020-11-13 23:14:32 UTC, end at Fri 2020-11-13 23:15:42 UTC. --
* Nov 13 23:14:56 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:56.481211381Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Nov 13 23:14:56 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:56.618172724Z" level=warning msg="Your kernel does not support cgroup blkio weight"
* Nov 13 23:14:56 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:56.618349496Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
* Nov 13 23:14:56 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:56.618463600Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
* Nov 13 23:14:56 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:56.618480886Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
* Nov 13 23:14:56 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:56.618495000Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
* Nov 13 23:14:56 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:56.618509257Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
* Nov 13 23:14:56 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:56.619069990Z" level=info msg="Loading containers: start."
* Nov 13 23:14:56 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:56.930508466Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
* Nov 13 23:14:57 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:57.049560583Z" level=info msg="Loading containers: done."
* Nov 13 23:14:57 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:57.092188344Z" level=info msg="Docker daemon" commit=4484c46 graphdriver(s)=overlay2 version=19.03.13
* Nov 13 23:14:57 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:57.093269348Z" level=info msg="Daemon has completed initialization"
* Nov 13 23:14:57 nospam-20201113231417-7409 systemd[1]: Started Docker Application Container Engine.
* Nov 13 23:14:57 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:57.126120126Z" level=info msg="API listen on /var/run/docker.sock"
* Nov 13 23:14:57 nospam-20201113231417-7409 dockerd[2331]: time="2020-11-13T23:14:57.132121223Z" level=info msg="API listen on [::]:2376"
* Nov 13 23:15:13 nospam-20201113231417-7409 dockerd[2338]: time="2020-11-13T23:15:13.169179598Z" level=info msg="shim containerd-shim started" address=/containerd-shim/7ae2c145f733493a8f8313b396588f3c0fc6e97942c4c5217b6e0c5a3764fbf5.sock debug=false pid=3203
* Nov 13 23:15:13 nospam-20201113231417-7409 dockerd[2338]: time="2020-11-13T23:15:13.178795893Z" level=info msg="shim containerd-shim started" address=/containerd-shim/c2b95fdf0b7fa608ac096b551a14f575fc027e9bef8c666cb67a86856798d00a.sock debug=false pid=3208
* Nov 13 23:15:13 nospam-20201113231417-7409 dockerd[2338]: time="2020-11-13T23:15:13.192600842Z" level=info msg="shim containerd-shim started" address=/containerd-shim/a0b5effd9ad5455a1147b29b8254ac61a6920136567895b001d73a9661c27393.sock debug=false pid=3223
* Nov 13 23:15:13 nospam-20201113231417-7409 dockerd[2338]: time="2020-11-13T23:15:13.234005383Z" level=info msg="shim containerd-shim started" address=/containerd-shim/6ed9aa07c4c409f8656f0885012b297e6526d028242e2f067e4905d0a55e50c2.sock debug=false pid=3241
* Nov 13 23:15:14 nospam-20201113231417-7409 dockerd[2338]: time="2020-11-13T23:15:14.954245779Z" level=info msg="shim containerd-shim started" address=/containerd-shim/1d4033cf40491146328f0fbd1e7f65208ef73ba2107004fcb89d720d61011305.sock debug=false pid=3368
* Nov 13 23:15:15 nospam-20201113231417-7409 dockerd[2338]: time="2020-11-13T23:15:15.017987633Z" level=info msg="shim containerd-shim started" address=/containerd-shim/06776ea32fc85206f2d67be754730eda2baacab5b70e67525fe1ec0c3a9bc48d.sock debug=false pid=3392
* Nov 13 23:15:15 nospam-20201113231417-7409 dockerd[2338]: time="2020-11-13T23:15:15.046811182Z" level=info msg="shim containerd-shim started" address=/containerd-shim/0ea21170c2c27edb0ebb70e389d0b842f91abd4fef960a8a5e0c4dc47ca3e4e7.sock debug=false pid=3405
* Nov 13 23:15:15 nospam-20201113231417-7409 dockerd[2338]: time="2020-11-13T23:15:15.060214842Z" level=info msg="shim containerd-shim started" address=/containerd-shim/e51d85317c948a2bd46827f686c8bdfc933025905884113b3af8cc8365ff92c4.sock debug=false pid=3411
* Nov 13 23:15:39 nospam-20201113231417-7409 dockerd[2338]: time="2020-11-13T23:15:39.707088708Z" level=info msg="shim containerd-shim started" address=/containerd-shim/fa65780bf6719f7a366709db8706ec99895d36b1dd458160caee31d718972fc5.sock debug=false pid=4151
* Nov 13 23:15:40 nospam-20201113231417-7409 dockerd[2338]: time="2020-11-13T23:15:40.965920991Z" level=info msg="shim containerd-shim started" address=/containerd-shim/e3c0577cc68f43d2fd2c3425280e88b4bd4aed7ae064f45eeefee46202e829bb.sock debug=false pid=4210
*
* ==> container status <==
* CONTAINER       IMAGE           CREATED          STATE     NAME                       ATTEMPT   POD ID
* 80c95993ba3fe   635b36f4d89f0   2 seconds ago    Running   kube-proxy                 0         537e295d8dba5
* 1ff3439160c79   0369cf4303ffd   28 seconds ago   Running   etcd                       0         395edc111b799
* 17cd449abd5f5   4830ab6185860   28 seconds ago   Running   kube-controller-manager    0         3c808a58df490
* 7f3cd02c17792   14cd22f7abe78   28 seconds ago   Running   kube-scheduler             0         d2f24a972219e
* a39cea87e3ea6   b15c6247777d7   28 seconds ago   Running   kube-apiserver             0         8a82ec56f46a5
*
* ==> describe nodes <==
* Name: nospam-20201113231417-7409
* Roles: master
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=nospam-20201113231417-7409
* kubernetes.io/os=linux
* minikube.k8s.io/commit=f1624ef53a2521d2c375e24d59fe2d2c53b4ded0
* minikube.k8s.io/name=nospam-20201113231417-7409
* minikube.k8s.io/updated_at=2020_11_13T23_15_33_0700
* minikube.k8s.io/version=v1.15.0
* node-role.kubernetes.io/master=
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Fri, 13 Nov 2020 23:15:26 +0000
* Taints: node.kubernetes.io/not-ready:NoSchedule
* Unschedulable: false
* Lease:
* HolderIdentity: nospam-20201113231417-7409
* AcquireTime: <unset>
* RenewTime: Fri, 13 Nov 2020 23:15:34 +0000
* Conditions:
* Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
* ----             ------  -----------------                 ------------------                ------                       -------
* MemoryPressure   False   Fri, 13 Nov 2020 23:15:35 +0000   Fri, 13 Nov 2020 23:15:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
* DiskPressure     False   Fri, 13 Nov 2020 23:15:35 +0000   Fri, 13 Nov 2020 23:15:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
* PIDPressure      False   Fri, 13 Nov 2020 23:15:35 +0000   Fri, 13 Nov 2020 23:15:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
* Ready            False   Fri, 13 Nov 2020 23:15:35 +0000   Fri, 13 Nov 2020 23:15:18 +0000   KubeletNotReady              container runtime status check may not have completed yet
* Addresses:
* InternalIP: 192.168.39.66
* Hostname: nospam-20201113231417-7409
* Capacity:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 2133072Ki
* pods: 110
* Allocatable:
* cpu: 2
* ephemeral-storage: 16954224Ki
* hugepages-2Mi: 0
* memory: 2133072Ki
* pods: 110
* System Info:
* Machine ID: fa00240fb79643e7be3cff40db3720b8
* System UUID: fa00240f-b796-43e7-be3c-ff40db3720b8
* Boot ID: fe35aa2d-a5d6-4e4c-a4bd-c2ce2e55438d
* Kernel Version: 4.19.150
* OS Image: Buildroot 2020.02.7
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://19.3.13
* Kubelet Version: v1.19.4
* Kube-Proxy Version: v1.19.4
* Non-terminated Pods: (5 in total)
* Namespace     Name                                                   CPU Requests   CPU Limits   Memory Requests   Memory Limits   AGE
* ---------     ----                                                   ------------   ----------   ---------------   -------------   ---
* kube-system   etcd-nospam-20201113231417-7409                        0 (0%)         0 (0%)       0 (0%)            0 (0%)          6s
* kube-system   kube-apiserver-nospam-20201113231417-7409              250m (12%)     0 (0%)       0 (0%)            0 (0%)          6s
* kube-system   kube-controller-manager-nospam-20201113231417-7409     200m (10%)     0 (0%)       0 (0%)            0 (0%)          6s
* kube-system   kube-proxy-ksrvl                                       0 (0%)         0 (0%)       0 (0%)            0 (0%)          4s
* kube-system   kube-scheduler-nospam-20201113231417-7409              100m (5%)      0 (0%)       0 (0%)            0 (0%)          6s
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource            Requests     Limits
* --------            --------     ------
* cpu                 550m (27%)   0 (0%)
* memory              0 (0%)       0 (0%)
* ephemeral-storage   0 (0%)       0 (0%)
* hugepages-2Mi       0 (0%)       0 (0%)
* Events:
* Type    Reason                   Age                From          Message
* ----    ------                   ----               ----          -------
* Normal  NodeHasSufficientMemory  30s (x8 over 31s)  kubelet       Node nospam-20201113231417-7409 status is now: NodeHasSufficientMemory
* Normal  NodeHasNoDiskPressure    30s (x7 over 31s)  kubelet       Node nospam-20201113231417-7409 status is now: NodeHasNoDiskPressure
* Normal  NodeHasSufficientPID     30s (x7 over 31s)  kubelet       Node nospam-20201113231417-7409 status is now: NodeHasSufficientPID
* Normal  Starting                 8s                 kubelet       Starting kubelet.
* Normal  NodeHasSufficientMemory  7s                 kubelet       Node nospam-20201113231417-7409 status is now: NodeHasSufficientMemory
* Normal  NodeHasNoDiskPressure    7s                 kubelet       Node nospam-20201113231417-7409 status is now: NodeHasNoDiskPressure
* Normal  NodeHasSufficientPID     7s                 kubelet       Node nospam-20201113231417-7409 status is now: NodeHasSufficientPID
* Normal  NodeAllocatableEnforced  6s                 kubelet       Updated Node Allocatable limit across pods
* Normal  Starting                 1s                 kube-proxy    Starting kube-proxy.
*
* ==> dmesg <==
* [Nov13 23:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
* [ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
* [ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
* [ +0.148145] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
* [ +4.849304] Unstable clock detected, switching default tracing clock to "global"
* If you want to keep using the local clock, then add:
* "trace_clock=local"
* on the kernel command line
* [ +0.000319] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
* [ +4.186958] systemd-fstab-generator[1155]: Ignoring "noauto" for root device
* [ +0.078704] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
* [ +0.000005] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
* [ +1.536016] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1711 comm=systemd-network
* [ +1.922260] vboxguest: loading out-of-tree module taints kernel.
* [ +0.008181] vboxguest: PCI device not found, probably running on physical hardware.
* [ +1.395542] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
* [ +3.321535] systemd-fstab-generator[2071]: Ignoring "noauto" for root device
* [ +0.250924] systemd-fstab-generator[2084]: Ignoring "noauto" for root device
* [ +14.415162] systemd-fstab-generator[2317]: Ignoring "noauto" for root device
* [ +2.557742] kauditd_printk_skb: 68 callbacks suppressed
* [ +0.673720] systemd-fstab-generator[2484]: Ignoring "noauto" for root device
* [Nov13 23:15] systemd-fstab-generator[2735]: Ignoring "noauto" for root device
* [ +2.118673] kauditd_printk_skb: 107 callbacks suppressed
* [ +20.776067] systemd-fstab-generator[3854]: Ignoring "noauto" for root device
* [ +9.868908] kauditd_printk_skb: 38 callbacks suppressed
*
* ==> etcd [1ff3439160c7] <==
* 2020-11-13 23:15:19.821222 I | embed: listening for metrics on http://127.0.0.1:2381
* 2020-11-13 23:15:19.822338 I | etcdserver: 4805191378a47515 as single-node; fast-forwarding 9 ticks (election ticks 10)
* 2020-11-13 23:15:19.823578 I | embed: listening for peers on 192.168.39.66:2380
* raft2020/11/13 23:15:19 INFO: 4805191378a47515 switched to configuration voters=(5189581717033481493)
* 2020-11-13 23:15:19.825544 I | etcdserver/membership: added member 4805191378a47515 [https://192.168.39.66:2380] to cluster e2b8cbf3a588a126
* raft2020/11/13 23:15:20 INFO: 4805191378a47515 is starting a new election at term 1
* raft2020/11/13 23:15:20 INFO: 4805191378a47515 became candidate at term 2
* raft2020/11/13 23:15:20 INFO: 4805191378a47515 received MsgVoteResp from 4805191378a47515 at term 2
* raft2020/11/13 23:15:20 INFO: 4805191378a47515 became leader at term 2
* raft2020/11/13 23:15:20 INFO: raft.node: 4805191378a47515 elected leader 4805191378a47515 at term 2
* 2020-11-13 23:15:20.168528 I | etcdserver: setting up the initial cluster version to 3.4
* 2020-11-13 23:15:20.183997 N | etcdserver/membership: set the initial cluster version to 3.4
* 2020-11-13 23:15:20.184621 I | etcdserver/api: enabled capabilities for version 3.4
* 2020-11-13 23:15:20.185066 I | etcdserver: published {Name:nospam-20201113231417-7409 ClientURLs:[https://192.168.39.66:2379]} to cluster e2b8cbf3a588a126
* 2020-11-13 23:15:20.185502 I | embed: ready to serve client requests
* 2020-11-13 23:15:20.186630 I | embed: ready to serve client requests
* 2020-11-13 23:15:20.189773 I | embed: serving client requests on 192.168.39.66:2379
* 2020-11-13 23:15:20.189957 I | embed: serving client requests on 127.0.0.1:2379
* 2020-11-13 23:15:27.079920 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2020-11-13 23:15:27.082947 W | etcdserver: read-only range request "key:\"/registry/ranges/serviceips\" " with result "range_response_count:0 size:4" took too long (199.071742ms) to execute
* 2020-11-13 23:15:27.084113 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-nospam-20201113231417-7409\" " with result "range_response_count:0 size:4" took too long (205.348852ms) to execute
* 2020-11-13 23:15:27.085095 W | etcdserver: read-only range request "key:\"/registry/csinodes/nospam-20201113231417-7409\" " with result "range_response_count:0 size:4" took too long (208.386882ms) to execute
* 2020-11-13 23:15:27.121497 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/nospam-20201113231417-7409\" " with result "range_response_count:0 size:4" took too long (235.210385ms) to execute
* 2020-11-13 23:15:27.122335 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:0 size:4" took too long (236.407824ms) to execute
* 2020-11-13 23:15:27.123004 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:0 size:4" took too long (237.134886ms) to execute
*
* ==> kernel <==
* 23:15:43 up 1 min, 0 users, load average: 3.36, 0.91, 0.31
* Linux nospam-20201113231417-7409 4.19.150 #1 SMP Fri Nov 6 15:58:07 PST 2020 x86_64 GNU/Linux
* PRETTY_NAME="Buildroot 2020.02.7"
*
* ==> kube-apiserver [a39cea87e3ea] <==
* I1113 23:15:26.461200 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
* I1113 23:15:26.461510 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I1113 23:15:26.461768 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* E1113 23:15:26.620486 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.66, ResourceVersion: 0, AdditionalErrorMsg:
* I1113 23:15:26.747505 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
* I1113 23:15:26.756749 1 cache.go:39] Caches are synced for AvailableConditionController controller
* I1113 23:15:26.777686 1 shared_informer.go:247] Caches are synced for crd-autoregister
* I1113 23:15:26.842105 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
* I1113 23:15:26.851948 1 cache.go:39] Caches are synced for autoregister controller
* I1113 23:15:27.435196 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
* I1113 23:15:27.435485 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
* I1113 23:15:27.503268 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
* I1113 23:15:27.561789 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
* I1113 23:15:27.562344 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
* I1113 23:15:29.476202 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
* I1113 23:15:29.664897 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
* W1113 23:15:30.027247 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.39.66]
* I1113 23:15:30.032191 1 controller.go:606] quota admission added evaluator for: endpoints
* I1113 23:15:30.064288 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
* I1113 23:15:31.135301 1 controller.go:606] quota admission added evaluator for: serviceaccounts
* I1113 23:15:32.825790 1 controller.go:606] quota admission added evaluator for: deployments.apps
* I1113 23:15:33.056179 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
* I1113 23:15:34.778117 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
* I1113 23:15:38.516770 1 controller.go:606] quota admission added evaluator for: replicasets.apps
* I1113 23:15:38.627889 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
*
* ==> kube-controller-manager [17cd449abd5f] <==
* I1113 23:15:38.327243 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
* I1113 23:15:38.328494 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
* I1113 23:15:38.333819 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
* I1113 23:15:38.337949 1 shared_informer.go:247] Caches are synced for ReplicaSet
* I1113 23:15:38.429224 1 shared_informer.go:247] Caches are synced for disruption
* I1113 23:15:38.429269 1 disruption.go:339] Sending events to api server.
* I1113 23:15:38.481246 1 shared_informer.go:247] Caches are synced for deployment
* I1113 23:15:38.482086 1 shared_informer.go:247] Caches are synced for resource quota
* I1113 23:15:38.483287 1 shared_informer.go:247] Caches are synced for daemon sets
* I1113 23:15:38.534192 1 shared_informer.go:247] Caches are synced for taint
* I1113 23:15:38.537038 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
* W1113 23:15:38.538187 1 node_lifecycle_controller.go:1044] Missing timestamp for Node nospam-20201113231417-7409. Assuming now as a timestamp.
* I1113 23:15:38.538614 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
* I1113 23:15:38.544169 1 taint_manager.go:187] Starting NoExecuteTaintManager
* I1113 23:15:38.552049 1 shared_informer.go:247] Caches are synced for resource quota
* I1113 23:15:38.575892 1 event.go:291] "Event occurred" object="nospam-20201113231417-7409" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node nospam-20201113231417-7409 event: Registered Node nospam-20201113231417-7409 in Controller"
* I1113 23:15:38.593764 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
* I1113 23:15:38.625085 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-f9fd979d6 to 1"
* I1113 23:15:38.794084 1 shared_informer.go:247] Caches are synced for garbage collector
* I1113 23:15:38.795152 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
* I1113 23:15:38.794983 1 shared_informer.go:247] Caches are synced for garbage collector
* I1113 23:15:38.866342 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ksrvl"
* I1113 23:15:38.880580 1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-ng68r"
* E1113 23:15:39.043766 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"37e1abbb-2de0-4393-9742-508a981aa8e2", ResourceVersion:"237", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63740906133, loc:(*time.Location)(0x6a61c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001be3640), FieldsType:"FieldsV1", FieldsV1:(*v1.Fiel
dsV1)(0xc001be3660)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001be3680), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)
(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001bf4d80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v
1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001be36a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersi
stentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001be36c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.
DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.19.4", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"",
ValueFrom:(*v1.EnvVarSource)(0xc001be3700)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001afdd40), Stdin:false, StdinOnce:false, TTY:false}},
EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001c849f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00041f5e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConf
ig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00055ba78)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001c84a48)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
* E1113 23:15:39.260343 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"37e1abbb-2de0-4393-9742-508a981aa8e2", ResourceVersion:"352", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63740906133, loc:(*time.Location)(0x6a61c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000c22000), FieldsType:"FieldsV1", FieldsV1:(*v1.Fiel
dsV1)(0xc000c22020)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000c22040), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000c22060)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000c22080), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistent
DiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001b02340), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.S
caleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000c220a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolum
eSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000c220c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBD
VolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.19.4", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf
", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000c22100)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMes
sagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001afc420), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0002458b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0000f7dc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operat
or:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001c2e618)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000245978)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object h
as been modified; please apply your changes to the latest version and try again
*
* ==> kube-proxy [80c95993ba3f] <==
* I1113 23:15:41.525265 1 node.go:136] Successfully retrieved node IP: 192.168.39.66
* I1113 23:15:41.525652 1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.39.66), assume IPv4 operation
* W1113 23:15:41.752595 1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
* I1113 23:15:41.752872 1 server_others.go:186] Using iptables Proxier.
* W1113 23:15:41.752967 1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
* I1113 23:15:41.753074 1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
* I1113 23:15:41.754136 1 server.go:650] Version: v1.19.4
* I1113 23:15:41.755571 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
* I1113 23:15:41.755859 1 conntrack.go:52] Setting nf_conntrack_max to 131072
* I1113 23:15:41.756614 1 conntrack.go:83] Setting conntrack hashsize to 32768
* I1113 23:15:41.761729 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I1113 23:15:41.761799 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I1113 23:15:41.763752 1 config.go:315] Starting service config controller
* I1113 23:15:41.763865 1 shared_informer.go:240] Waiting for caches to sync for service config
* I1113 23:15:41.764009 1 config.go:224] Starting endpoint slice config controller
* I1113 23:15:41.764018 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
* I1113 23:15:41.864503 1 shared_informer.go:247] Caches are synced for endpoint slice config
* I1113 23:15:41.864709 1 shared_informer.go:247] Caches are synced for service config
*
* ==> kube-scheduler [7f3cd02c1779] <==
* E1113 23:15:26.854689 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E1113 23:15:26.854798 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E1113 23:15:26.854918 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E1113 23:15:26.855020 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E1113 23:15:26.855101 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E1113 23:15:26.855217 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E1113 23:15:26.855329 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E1113 23:15:26.859571 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
* E1113 23:15:26.867896 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
* E1113 23:15:26.868080 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
* E1113 23:15:26.868967 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E1113 23:15:27.799223 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E1113 23:15:27.809293 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E1113 23:15:27.870095 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E1113 23:15:27.878614 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
* E1113 23:15:27.921326 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
* E1113 23:15:28.043648 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E1113 23:15:28.072321 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E1113 23:15:28.118329 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E1113 23:15:28.121293 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E1113 23:15:28.141069 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E1113 23:15:28.147313 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E1113 23:15:28.245464 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E1113 23:15:28.345647 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
* I1113 23:15:31.408985 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Fri 2020-11-13 23:14:32 UTC, end at Fri 2020-11-13 23:15:43 UTC. --
* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.513972 3869 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.513993 3869 policy_none.go:43] [cpumanager] none policy: Start
* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.540586 3869 plugin_manager.go:114] Starting Kubelet Plugin Manager
* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.558750 3869 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.670054 3869 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.696286 3869 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.709902 3869 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.712716 3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/8669de9954d4bfb59c4e5b26ffd421d9-etcd-certs") pod "etcd-nospam-20201113231417-7409" (UID: "8669de9954d4bfb59c4e5b26ffd421d9")
* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.712904 3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/8669de9954d4bfb59c4e5b26ffd421d9-etcd-data") pod "etcd-nospam-20201113231417-7409" (UID: "8669de9954d4bfb59c4e5b26ffd421d9")
* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.814769 3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/6cb144f7d82285562d6fc7ed0aeee754-k8s-certs") pod "kube-controller-manager-nospam-20201113231417-7409" (UID: "6cb144f7d82285562d6fc7ed0aeee754")
* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.815202 3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/6cb144f7d82285562d6fc7ed0aeee754-kubeconfig") pod "kube-controller-manager-nospam-20201113231417-7409" (UID: "6cb144f7d82285562d6fc7ed0aeee754")
* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.815571 3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/6cb144f7d82285562d6fc7ed0aeee754-usr-share-ca-certificates") pod "kube-controller-manager-nospam-20201113231417-7409" (UID: "6cb144f7d82285562d6fc7ed0aeee754")
* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.815903 3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/38744c90661b22e9ae232b0452c54538-kubeconfig") pod "kube-scheduler-nospam-20201113231417-7409" (UID: "38744c90661b22e9ae232b0452c54538")
* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.818573 3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/552dc2aa9de19317813f734e107fb51a-ca-certs") pod "kube-apiserver-nospam-20201113231417-7409" (UID: "552dc2aa9de19317813f734e107fb51a")
* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.819834 3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/552dc2aa9de19317813f734e107fb51a-usr-share-ca-certificates") pod "kube-apiserver-nospam-20201113231417-7409" (UID: "552dc2aa9de19317813f734e107fb51a")
* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.820317 3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/552dc2aa9de19317813f734e107fb51a-k8s-certs") pod "kube-apiserver-nospam-20201113231417-7409" (UID: "552dc2aa9de19317813f734e107fb51a")
* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.822298 3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/6cb144f7d82285562d6fc7ed0aeee754-ca-certs") pod "kube-controller-manager-nospam-20201113231417-7409" (UID: "6cb144f7d82285562d6fc7ed0aeee754")
* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.830704 3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/6cb144f7d82285562d6fc7ed0aeee754-flexvolume-dir") pod "kube-controller-manager-nospam-20201113231417-7409" (UID: "6cb144f7d82285562d6fc7ed0aeee754")
* Nov 13 23:15:36 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:36.843712 3869 reconciler.go:157] Reconciler: start to sync state
* Nov 13 23:15:39 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:39.024699 3869 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Nov 13 23:15:39 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:39.215881 3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/71650814-b622-4a12-ad77-4348de106142-lib-modules") pod "kube-proxy-ksrvl" (UID: "71650814-b622-4a12-ad77-4348de106142")
* Nov 13 23:15:39 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:39.220857 3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/71650814-b622-4a12-ad77-4348de106142-xtables-lock") pod "kube-proxy-ksrvl" (UID: "71650814-b622-4a12-ad77-4348de106142")
* Nov 13 23:15:39 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:39.224026 3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/71650814-b622-4a12-ad77-4348de106142-kube-proxy") pod "kube-proxy-ksrvl" (UID: "71650814-b622-4a12-ad77-4348de106142")
* Nov 13 23:15:39 nospam-20201113231417-7409 kubelet[3869]: I1113 23:15:39.226824 3869 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-mmntq" (UniqueName: "kubernetes.io/secret/71650814-b622-4a12-ad77-4348de106142-kube-proxy-token-mmntq") pod "kube-proxy-ksrvl" (UID: "71650814-b622-4a12-ad77-4348de106142")
* Nov 13 23:15:40 nospam-20201113231417-7409 kubelet[3869]: W1113 23:15:40.646125 3869 pod_container_deletor.go:79] Container "537e295d8dba5b895e1f60092f2066c2564ebf341f0579e0b7d6e071927dd765" not found in pod's containers
-- /stdout --
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p nospam-20201113231417-7409 -n nospam-20201113231417-7409
helpers_test.go:255: (dbg) Run: kubectl --context nospam-20201113231417-7409 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: non-running pods: coredns-f9fd979d6-ng68r storage-provisioner
helpers_test.go:263: ======> post-mortem[TestErrorSpam]: describe non-running pods <======
helpers_test.go:266: (dbg) Run: kubectl --context nospam-20201113231417-7409 describe pod coredns-f9fd979d6-ng68r storage-provisioner
helpers_test.go:266: (dbg) Non-zero exit: kubectl --context nospam-20201113231417-7409 describe pod coredns-f9fd979d6-ng68r storage-provisioner: exit status 1 (144.889966ms)
** stderr **
Error from server (NotFound): pods "coredns-f9fd979d6-ng68r" not found
Error from server (NotFound): pods "storage-provisioner" not found
** /stderr **
helpers_test.go:268: kubectl --context nospam-20201113231417-7409 describe pod coredns-f9fd979d6-ng68r storage-provisioner: exit status 1
helpers_test.go:171: Cleaning up "nospam-20201113231417-7409" profile ...
helpers_test.go:172: (dbg) Run: out/minikube-linux-amd64 delete -p nospam-20201113231417-7409
helpers_test.go:172: (dbg) Done: out/minikube-linux-amd64 delete -p nospam-20201113231417-7409: (1.188552224s)
--- FAIL: TestErrorSpam (88.77s)